<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<font color="#000066">Hi Anna,<br>
<br>
Thanks for your questions. I have also added some responses below.<br>
</font><br>
<font color="#000066">Andy</font><br>
<br>
<div class="moz-cite-prefix">On 5/12/2014 8:16 AM, Wayne Marcus GETZ
wrote:<br>
</div>
<blockquote
cite="mid:CALL64THza+_HKvDsraRWWUfSN15p_8hVOT1gU88ybC3MQ7s2WQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>Hi Anna:<br>
<br>
</div>
Here is my response<br>
<div>
<div>
<div class="gmail_extra"><br>
<br>
<div class="gmail_quote">On Mon, May 12, 2014 at 6:56 AM,
Anna Schweiger <span dir="ltr"><<a
moz-do-not-send="true"
href="mailto:anna.schweiger@nationalpark.ch"
target="_blank">anna.schweiger@nationalpark.ch</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div link="blue" vlink="purple" lang="DE-CH">
<div>
<p class="MsoNormal"><span lang="EN-US">Dear
T-LoCoH group, dear Andy</span></p>
<p class="MsoNormal"><span lang="EN-US">First of
all: I want to thank Andy and his colleagues
for the great effort you are putting into
T-LoCoH! I have started to use the package
some weeks ago and I have to say that I’ve
hardly ever come across such a well-explained
tutorial (everything worked right from the
start!)! Your publication is also really
helpful and easy to follow and so are the R
help files. Thank you so much! Good
documentations make life a lot easier (and
work fun)!!! </span></p>
<p class="MsoNormal"><span lang="EN-US">However, I
have a couple of questions I could not figure
out myself. Maybe someone has some ideas on
the following:</span></p>
<p style="margin-left:18pt"><span lang="EN-US"><span>1.<span
style="font:7pt 'Times New Roman'"> </span></span></span><span
lang="EN-US">The first is a methodological
question: I’m comparing the feeding areas of
ibex and chamois in the summers of four
consecutive years in one valley where they
co-occur. For each year I have several (1-7)
individuals per species. My assumptions are
that individuals of the same species behave
more similarly than individuals of different
species. In a first step, I chose the same
values for s and a (I use the “a” method) for
all individuals of the same species, across
the years; i.e. all ibex have the same s and a
value and all chamois another s and a value.
However, I could also argue that the
individuals of one species behave more similarly
in the same year than in the other years
(maybe because of environmental variability).
Therefore, I was wondering if selecting
different s and a values for every year makes
sense? In the end I’m extracting environmental
data based on the polygons defined by what can
be called “core feeding areas” (I select them
based on duration of stay and number of
separate visits). Then I compare the two
species in GLMMs. So I’m basically pooling all
ibex data (different ind, different years) and
comparing them to all chamois. I can account for
the individual and yearly differences by
including animal ID and year as a random
effect. Still, I believe the parameters of all
individuals from one species should be
somewhat comparable. So far I could not quite
get my head around this problem: Should I
choose only one s and a value per species, or
maybe only one for both species, or is it
possible to vary s and a per year or even per
individual? Do you have any suggestions? For
me this is really tricky.</span></p>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>You need to use the same s and a values for all
species. However, you can ask the question: how
robust is my result to variations in a and s? Thus
you could see if your result holds up for all a and s
or breaks down as these change. If it does break
down, then this breakdown might have some significant
implications, because it might mean that differences
emerge or disappear, as the case may be, when time is
given more or less weighting. <br>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<br>
<font color="#000066">I can see your quandary. I agree you need to
be consistent across individuals or datasets that you categorize
in one group for subsequent parts of the analysis. One way to be
consistent is to use the same value of 's' and 'a', another way to
be consistent is to use the same process for selecting 's' and
'a'. You've stumbled on one of T-LoCoH's functional limitations -
there isn't a magic formula for finding the best 's' or 'a' (this
can also be seen as a strength, because it keeps the analyst and
system knowledge in the picture). An alternative way for selecting
a value of 's', that you can apply consistently and fairly easily
across all data sets, is to pick the 's' value that returns the
same proportion of time selected hulls. This is a reasonable and
defensible way of selecting 's' when you consider that the term</font><font
color="#000066"> in the time-scaled distance
equation that 's' controls is essentially the
distance the individual could have traveled in a given time period
if it had been travelling at the maximum observed speed. In other
words, two data sets that are in many ways similar (same species,
same type of behavior) could have different values of 's' for the
same time-distance balance, because maximum observed speed of a
dataset is affected by sampling as well as behavior. I am
currently of the belief that the best way to get a consistent
balancing of space-time across data sets is to pick 's' that
corresponds to a consistent proportion of time selected hulls
(which will probably result in very close 's' values in absolute
terms; we need to do some more work in this area).<br>
<br>
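To make that concrete, here is a rough sketch of applying a common ptsh target across datasets. It assumes a LoCoH-xy object named fredo.lxy and uses lxy.ptsh.add(); the exact arguments and the structure of the saved table may differ in your version of T-LoCoH, so check the help page.

```r
## A sketch, not a recipe: pick 's' from a common target
## proportion of time-selected hulls (ptsh), e.g. 0.5,
## applied to each individual/dataset in turn.
## Assumes a LoCoH-xy object named fredo.lxy.
library(tlocoh)

## Estimate the relationship between s and the proportion of
## time-selected hulls (this samples values of s, saves the
## resulting table in the object, and plots ptsh vs. s)
fredo.lxy <- lxy.ptsh.add(fredo.lxy)

## Read off the value of s closest to your target ptsh from
## the plot/summary and use it for this dataset; repeat with
## the same target for every dataset you want to compare
summary(fredo.lxy)
```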
The same principle applies for selecting 'a' across datasets that
should be analyzed in a similar manner - use the same value or the
same process. If you define the optimal 'a' as the one that fills
spurious holes in core areas, and minimizes cross-overs in areas
where the animal wasn't seen (which are the principles we present
in the paper), you'll probably find the same value of 'a' will do
that pretty well across similar datasets (this again
presumes the data sampling is consistent; if the sampling
frequency is higher in one dataset you'll obviously need
larger values of 'a' for the same size hulls, because 'a'
is a cumulative distance). <br>
<br>
Hope this helps. Let us know how it goes because parameter
selection is one of the most common challenges when trying to
create comparable space use models for different datasets (which
all methods have to contend with).<br>
</font><br>
<blockquote
cite="mid:CALL64THza+_HKvDsraRWWUfSN15p_8hVOT1gU88ybC3MQ7s2WQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>
<div>
<div class="gmail_extra">
<div class="gmail_quote">
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div link="blue" vlink="purple" lang="DE-CH">
<div>
<p style="margin-left:18pt"><span lang="EN-US"></span></p>
<p class="MsoNormal"><span lang="EN-US">My other
questions are more technical: </span></p>
<p class="MsoNormal"><span lang="EN-US"> </span></p>
<p style="margin-left:18pt"><span lang="EN-US"><span>2.<span
style="font:7pt 'Times New Roman'"> </span></span></span><span
lang="EN-US">I want to manually offset
duplicate xy points in xyt.lxy. Is this
possible? I want to avoid random offset when
constructing hulls, to make the analysis
repeatable. Maybe the explanation is somewhere
in the help, but I couldn’t find it…</span></p>
</div>
</div>
</blockquote>
<div>Since time is unique, I don't see how you can have
overlapping points unless they are true duplicates.
Such duplicates must be removed. So I am not sure I
understand your question.<br>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<br>
<font color="#000066">Duplicate locations with identical time stamps
are
usually a data processing error (which is why the function that
creates a LoCoH-xy object from a series of locations checks for
that). Duplicate locations with different time stamps are usually
not an error, but the result of the animal resting, returning to
the same spot, or a rounding issue. But even when the time stamps
are different, duplicate locations can still be an issue with
T-LoCoH because you need unique locations to make a polygon. <br>
<br>
The current version of T-LoCoH handles duplicate locations when
creating hulls. There are two options based on the value of the<tt>
offset.dups</tt> parameter: ignoring them (at the risk of some
points not having enough unique neighbors to draw a polygon), or
randomly offsetting them by a fixed amount (the default). (The
original LoCoH package had a third option, deleting them, which
didn't seem like a good idea so it was removed). As an aside, we
have discussed the possibility of adding another option in future
versions of T-LoCoH, whereby different rules could be used to
construct hulls around duplicate locations, for example
constructing a circle with a fixed radius representing the size of
the nest, water hole, etc. This would require some a priori
knowledge of behavior when the animal is observed at the same
location multiple times. If anyone has thoughts about this please
let me know.<br>
<br>
The current version of T-LoCoH has an option to add a random
offset
(fixed distance, random direction) to duplicate locations when
constructing hulls. This is a problem, as Anna points out, for
reproducibility, because every time you construct hulls (and
subsequently isopleths), the duplicate locations will be offset
somewhat differently. An alternative approach is to randomly
offset the duplicate locations before constructing hulls (i.e., in
the Locoh-xy object). This should be done in full recognition that
you are effectively altering the input data (which may be fine for
home range construction, but for other analyses you would probably
want to use the original locations). There is not a function in
the T-LoCoH package to randomly offset duplicate locations in a
LoCoH-xy object (I've added this to my to-do list). It can be done
with the following commands:<br>
<br>
## These commands illustrate how to apply a random offset to <br>
## duplicate locations in LoCoH-xy object. Note that this <br>
## cannot be undone, and any subsequent analysis or construction<br>
## of hulls will be based on the altered data. <br>
<br>
## Given a LoCoH-xy object called <tt>fredo.lxy</tt><br>
<br>
## Get the coordinates of the Locoh-xy object<br>
xy <- coordinates(fredo.lxy$pts)<br>
<br>
## Identify the duplicate rows<br>
dup_idx <- duplicated(xy)<br>
<br>
## See how many locations are duplicate<br>
table(dup_idx)<br>
<br>
## Define the amount that duplicate locations will be randomly
offset<br>
## This is in map units.<br>
offset <- 1<br>
<br>
## Apply a random offset to the duplicate rows<br>
theta <- runif(n=sum(dup_idx), min=0, max=2*pi)<br>
xy[dup_idx,1] <- xy[dup_idx,1] + offset * cos(theta)<br>
xy[dup_idx,2] <- xy[dup_idx,2] + offset * sin(theta)<br>
<br>
## See if there are any more duplicate rows. (Should all be
false)<br>
table(duplicated(xy))<br>
<br>
## Next, we create a new SpatialPointsDataFrame by<br>
## i. Grabbing the attribute table of the existing locations<br>
## ii. Assigning the new locations (with offsets) as the
locations<br>
<br>
pts_df <- fredo.lxy$pts@data<br>
coordinates(pts_df) <- xy<br>
fredo.lxy$pts <- pts_df<br>
<br>
## The nearest neighbor lookup table is no longer valid and
will need to be <br>
## recreated. Likewise with the ptsh (proportion of time
selected hulls v. s) table.<br>
## Set these to null<br>
fredo.lxy$nn <- NULL<br>
fredo.lxy$ptsh <- NULL<br>
<br>
## Lastly, we need to recreate the movement parameters.<br>
fredo.lxy <- lxy.repair(fredo.lxy)<br>
<br>
## Should be done. Inspect results<br>
summary(fredo.lxy)<br>
plot(fredo.lxy)</font><br>
<br>
<blockquote
cite="mid:CALL64THza+_HKvDsraRWWUfSN15p_8hVOT1gU88ybC3MQ7s2WQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>
<div>
<div class="gmail_extra">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div link="blue" vlink="purple" lang="DE-CH">
<div>
<p style="margin-left:18pt"><span lang="EN-US"></span></p>
<p style="margin-left:18pt"><span lang="EN-US"><span>3.<span
style="font:7pt 'Times New Roman'"> </span></span></span><span
lang="EN-US">I’m resampling my data by using
lxy.thin.byfreq (common sampling interval
should be 4h, some individuals have 2h, some
10 min frequencies). Now, I have some cases
with time gaps of about 1 month. I would still
like to include these data. Is it possible to
split the data and include the two time
periods separately? Can this be done by
setting a value for tct in the auto.a method?
I don’t quite understand how tct works. </span></p>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<font color="#000066">I don't fully understand this question. </font><span
lang="EN-US"><font color="#000066"> lxy.thin.byfreq will
selectively remove locations to get as close as it can to the
desired sampling interval. If the desired sampling interval is 4
hours, and there is a gap of 30 days, it won't remove the points
on either end of the gap. It will only remove points where the
sampling interval is shorter than the desired interval. If you're
seeing a different effect with lxy.thin.byfreq let me know. The
tct argument in the auto.a() function is very different; it
acts as a filter for identifying which points should be used
in the computation of 'a' (it doesn't remove any points). <br>
</font><br>
</span>
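For reference, auto.a() is supplied as the 'a' argument when identifying nearest neighbors; a minimal sketch (the parameter values are illustrative only, and the nnn/ptp argument names follow the tutorial, so double-check them against your version's help page):

```r
## Sketch: tct as a filter inside auto.a(). auto.a() does not
## remove locations; tct only controls which points are used
## when computing 'a'. Values here are purely illustrative.
library(tlocoh)
fredo.lxy <- lxy.nn.add(fredo.lxy, s = 0.01,
                        a = auto.a(nnn = 15, ptp = 0.98, tct = 0.05))
```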
<blockquote
cite="mid:CALL64THza+_HKvDsraRWWUfSN15p_8hVOT1gU88ybC3MQ7s2WQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>
<div>
<div class="gmail_extra">
<div class="gmail_quote">
<div> </div>
<div>Andy will have to explain how this works. <br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div link="blue" vlink="purple" lang="DE-CH">
<div>
<p style="margin-left:18pt"><span lang="EN-US"></span></p>
<p style="margin-left:18pt"><span lang="EN-US"><span>4.<span
style="font:7pt 'Times New Roman'"> </span></span></span><span
lang="EN-US">Again about resampling: As
recommended in the help I thin bursts before
resampling the data to a common time interval.
I was wondering if the following is correct:
First I inspect the sampling frequency plot
with lxy.plot.freq. Then I thought: When
tau.diff.max (default) = 0.02 and tau
(median)=120 min, sampling frequencies between
117.6 - 122.4 should be fine. If I now see
points in the plot with let’s say delta t/tau
= 0.95, then sampling frequency= 0.95*120= 108
min which is outside the range of
tau.diff.max. In that case, should I set the
threshold value in lxy.thin.bursts to
thresh=0.98, to make sure all remaining points
fall within the range 117.6 - 122.4? I think
that having a sampling interval of 108 min in
a dataset that should have 120 min is not
uncommon and normally I would not think it is
problematic. But I have only a very vague idea
about the effects of such data intervals when
the algorithms start working. Is it possible
to provide any guidelines on thresholds for
thinning bursts? </span></p>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<br>
<font color="#000066">I can see how that can be confusing. The <span
lang="EN-US">tau.diff.max argument in the lxy.thin.bursts()
function actually has nothing to do with how points within "a
burst" are removed (which in this context refers to a </span><span
lang="EN-US"><span lang="EN-US">series of locations spaced
closely in time and presumed to be an error or artifact of a
hyperactive GPS recording device)</span>, or how it identifies
what group of points constitutes a burst. </span><span
lang="EN-US">The <span lang="EN-US">tau.diff.max </span>argument
is used downstream, after points have been removed, in computing
the movement parameters for the trajectory as a whole. The only
argument by which </span><span lang="EN-US"><span lang="EN-US">lxy.thin.bursts()
</span>defines a 'burst' is the <tt>thresh</tt> argument. </span>If
the median sampling interval of the data (not the desired
interval, but the actual interval), is 120 minutes, and thresh =
0.05 (i.e., 6 minutes), then any pair of points sampled within 6
minutes of each other or less are considered to be part of a
burst, and will be thinned down to a single point. <br>
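As a toy calculation (plain arithmetic, not a call to the package), the burst cutoff works out like this:

```r
## How lxy.thin.bursts() interprets thresh: points closer in
## time than tau * thresh are considered part of a burst.
tau <- 120       # median sampling interval of the data, in minutes
thresh <- 0.05   # the thresh argument
burst_cutoff <- tau * thresh
burst_cutoff     # 6 minutes
```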
<br>
Note if your ultimate goal is to resample the data to achieve a
uniform sampling interval, thinning out bursts may not be
necessary; lxy.thin.byfreq() will do the same. The
lxy.thin.byfreq() function basically lays down a time line of
desired sampling times, based on the desired sampling interval,
and then grabs the closest point in time to each one. It's a
little more complicated than that but that's the gist of it.<br>
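That timeline-and-nearest-point idea can be sketched in a few lines of base R (a toy illustration only; the real lxy.thin.byfreq() handles time windows, multiple individuals, and other edge cases):

```r
## Toy version of the idea behind lxy.thin.byfreq(): lay down
## a grid of desired sample times and keep the observed point
## closest in time to each grid time.
thin_byfreq_sketch <- function(t, interval) {
  grid <- seq(from = min(t), to = max(t), by = interval)
  keep <- unique(sapply(grid, function(g) which.min(abs(t - g))))
  sort(keep)
}

## Timestamps in minutes: a run of 10-min fixes followed by
## ~4-hour fixes, thinned toward a 240-minute interval
t <- c(0, 10, 20, 30, 240, 480, 490, 720)
t[thin_byfreq_sketch(t, 240)]   # 0 240 480 720
```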
<br>
I should also note for reference that the way the term 'burst' is
used in T-LoCoH is quite different from how the term is used in
the move package and other movement analysis packages. <br>
</font><br>
<br>
<blockquote
cite="mid:CALL64THza+_HKvDsraRWWUfSN15p_8hVOT1gU88ybC3MQ7s2WQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>
<div>
<div class="gmail_extra">
<div class="gmail_quote">
<div> Again, Andy will have to explain how this works.
<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div link="blue" vlink="purple" lang="DE-CH">
<div>
<p style="margin-left:18pt"><span lang="EN-US"></span></p>
<p style="margin-left:18pt"><span lang="EN-US"><span>5.<span
style="font:7pt 'Times New Roman'"> </span></span></span><span
lang="EN-US">And related to the question
above: Should I check and thin burst again
after resampling to a new time interval (with
the new range of tau values?)?</span></p>
</div>
</div>
</blockquote>
<br>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<font color="#000066">After you resample to a uniform sampling
interval, you shouldn't see any sampling intervals substantially
smaller than that (i.e., a burst; see above).<br>
</font><br>
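One quick way to check is to re-plot the sampling intervals after resampling (a sketch; argument names follow the tutorial, so verify against help(lxy.plot.freq) and help(lxy.thin.byfreq)):

```r
## Sketch: after resampling to a 4-hour interval, re-plot the
## sampling intervals to confirm no bursts remain. Assumes a
## LoCoH-xy object named fredo.lxy; samp.freq is in seconds.
library(tlocoh)
fredo.lxy <- lxy.thin.byfreq(fredo.lxy, samp.freq = 3600 * 4, byfreq = TRUE)
lxy.plot.freq(fredo.lxy, deltat.by.date = TRUE)
```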
<blockquote
cite="mid:CALL64THza+_HKvDsraRWWUfSN15p_8hVOT1gU88ybC3MQ7s2WQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>
<div>
<div class="gmail_extra">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div link="blue" vlink="purple" lang="DE-CH">
<div>
<p style="margin-left:18pt"> <span lang="EN-US"></span></p>
<p style="margin-left:18pt"><span lang="EN-US"><span>6.<span
style="font:7pt 'Times New Roman'"> </span></span></span><span
lang="EN-US">Generally, it is a bit hard for
me to choose parameters based on visual
interpretation (s, a, delta/tau etc. ). So far
I came to the conclusion that this is the best
I can do. However, I was wondering if there
are any general arguments to support the
choices one makes based on visual
interpretation. Do you have an opinion on
this? How could you argue (I’m thinking about
future referees…)?</span></p>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</div>
</div>
</blockquote>
<br>
<font color="#000066">Are you still speaking in terms of resampling?
The graphs of sampling intervals, for example, are of course a
guide. If you have a basis or theory for quantifying more
rigorously the rules or criteria you want for resampling, that
would certainly be another way to do it. You can extract pretty
much any statistical summaries you want from movement data. <br>
</font><br>
<blockquote
cite="mid:CALL64THza+_HKvDsraRWWUfSN15p_8hVOT1gU88ybC3MQ7s2WQ@mail.gmail.com"
type="cite">
<div dir="ltr">
<div>
<div>
<div class="gmail_extra">
<div class="gmail_quote">
<div> </div>
<div>There are arguments that one can use to justify one
choice over another. These are based on entropy
concepts, but we have yet to discuss or implement
these methods. So I cannot be more specific at this
time. <br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div link="blue" vlink="purple" lang="DE-CH">
<div>
<p style="margin-left:18pt"><span lang="EN-US"></span></p>
<p class="MsoNormal"><span lang="EN-US">I think
that’s it for the moment. I would really
appreciate any help or comments! </span></p>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>Good luck and all the best<br>
<br>
wayne<br>
<br>
<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px
0px 0.8ex;border-left:1px solid
rgb(204,204,204);padding-left:1ex">
<div link="blue" vlink="purple" lang="DE-CH">
<div>
<p class="MsoNormal"><span lang="EN-US"></span></p>
<p class="MsoNormal"><span lang="EN-US"> </span></p>
<p class="MsoNormal"><span lang="EN-US">All the
best, </span></p>
<p class="MsoNormal"><span lang="EN-US"> </span></p>
<p class="MsoNormal"> <span lang="EN-US">Anna </span></p>
<p class="MsoNormal"><span lang="EN-US"> </span></p>
<p class="MsoNormal"><span lang="EN-US">P.S.: I’m
not sure if this helps, but I think I came
across some typos in the R help file. Just in
case somebody is collecting them: </span></p>
<p class="MsoNormal"><span lang="EN-US">xyt.lxy:
To disable the checking for duplicate time
stamps, pass dup.dt.check=<span
style="background:none repeat scroll 0% 0%
lime">TRUE.</span> </span></p>
<p class="MsoNormal"> <span lang="EN-US">lxy.thin.bursts
{tlocoh}: To identify whether there are bursts
in a LoCoH-xy dataset, and the sampling
frequency of those bursts (i.e., the value ...
</span><span style="background:none repeat
scroll 0% 0% lime">TBC</span><span
lang="EN-US"></span></p>
<p class="MsoNormal"> THANKS FOR FINDING THE
TYPOS. I'M SURE THERE ARE MORE, PLEASE KEEP
SENDING THEM TO ME. <br>
</p>
</div>
</div>
<br>
_______________________________________________<br>
Tlocoh-info mailing list<br>
<a moz-do-not-send="true"
href="mailto:Tlocoh-info@lists.r-forge.r-project.org">Tlocoh-info@lists.r-forge.r-project.org</a><br>
<a moz-do-not-send="true"
href="http://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/tlocoh-info"
target="_blank">http://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/tlocoh-info</a><br>
<br>
</blockquote>
</div>
<br>
<br clear="all">
<br>
-- <br>
<div dir="ltr">__________________________________________________________<br>
___________________________________________________________<br>
<br>
Professor Wayne M. Getz<br>
A. Starker Leopold Professor of Wildlife Ecology<br>
Department Environmental Science Policy & Management<br>
130 Mulford Hall<br>
University of California at Berkeley<br>
CA 94720-3112, USA<br>
<br>
Campus Visitors: My office is in 5052 VLSB<br>
<br>
Fax: (1-510) 666-2352<br>
Office: (1-510) 642-8745<br>
Lab: (1-510) 643-1227<br>
email: <a moz-do-not-send="true"
href="mailto:wgetz@berkeley.edu" target="_blank">wgetz@berkeley.edu</a><br>
lab: <a moz-do-not-send="true"
href="http://www.CNR.Berkeley.EDU/%7Egetz/"
target="_blank">http://www.CNR.Berkeley.EDU/~getz/</a><br>
___________________________________________________________<br>
___________________________________________________________<br>
<br>
</div>
</div>
</div>
</div>
</div>
<br>
</blockquote>
<br>
</body>
</html>