[tlocoh-info] parameter selection, resampling, thin bursts

Andy Lyons lyons.andy at gmail.com
Fri May 16 19:18:28 CEST 2014


Hi Anna,

You're quite right, the code I sent you didn't take into account a 
Locoh-xy object that has multiple individuals. Duplicate locations for 
different individuals are of course fine and don't need to be offset 
because presumably you'll be constructing hulls for each individual 
separately. Hence the code I sent you and the summary() function report 
different numbers of duplicate locations.
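
For example, with a made-up toy data frame (just to illustrate, not your
data), duplicated() on the coordinates alone flags a repeat shared by
two animals, whereas including the id column only flags repeats within
the same animal:

xy_toy <- data.frame(id = c("a", "a", "b"),
                     x = c(100, 100, 100),
                     y = c(200, 200, 200))
table(duplicated(xy_toy[, c("x", "y")]))   ## 2 TRUE: duplicates across ids
table(duplicated(xy_toy))                  ## 1 TRUE: duplicate within the same id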

The code below has been modified to only offset duplicate locations for 
the same individual. Hope it works, and let me know if you have any 
difficulties or questions (all of which have been really helpful BTW).

Andy

ps: the error you were getting about the CRS being different is due to
an annoying glitch (I believe) in the sp package whereby a leading
space gets inserted into the proj4string. It's on my to-do list to
figure out when/why this happens so I can trap the problem and remove
the space, which causes problems if you later try to merge the
SpatialPointsDataFrames.
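
In the meantime, here's a rough workaround you could try (just a
sketch, I haven't tested it): check whether the proj4string of the pts
element starts with a space, and strip it before merging, e.g. for a
LoCoH-xy object called fredo.lxy:

p4s <- proj4string(fredo.lxy$pts)
p4s   ## look for a leading space before "+proj=..."
if (substr(p4s, 1, 1) == " ") {
    proj4string(fredo.lxy$pts) <- CRS(sub("^\\s+", "", p4s))
}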


## These commands illustrate how to apply a random offset to
## duplicate locations for the same individual in a LoCoH-xy object.
## Note that this cannot be undone, and any subsequent analysis
## or construction of hulls will be based on the modified data.

## Given a LoCoH-xy object called fredo.lxy

## Create a matrix containing the id, x-coord, and y-coord
id_xy  <- cbind(fredo.lxy$pts[["id"]], coordinates(fredo.lxy$pts))

## Identify the duplicate rows
dup_idx <- duplicated(id_xy)

## See how many locations are duplicates (counted within each id, totaled across ids)
table(dup_idx)

## Define the amount that duplicate locations will be randomly offset
## This is in map units (e.g., meters)
offset <- 1

## Apply a random offset to the duplicate rows
theta <- runif(n=sum(dup_idx), min=0, max=2*pi)
id_xy[dup_idx,2] <- id_xy[dup_idx,2] + offset * cos(theta)
id_xy[dup_idx,3] <- id_xy[dup_idx,3] + offset * sin(theta)

## See if there are any more duplicate rows. (Should all be false)
table(duplicated(id_xy))

## Next, we create a new SpatialPointsDataFrame by
## i. Grabbing the attribute table of the existing locations
## ii. Assigning the new locations (with offsets) as the locations

pts_df <- fredo.lxy$pts@data
coordinates(pts_df) <- id_xy[, c(2,3)]
fredo.lxy$pts <- pts_df
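
## Side note (a sketch, not something I've tested): the coordinates()<-
## call above returns a SpatialPointsDataFrame with an empty CRS. If your
## points carry a projection, you may want to save it beforehand, e.g.
## old_crs <- CRS(proj4string(fredo.lxy$pts)) before replacing
## fredo.lxy$pts, and then restore it here with
## proj4string(fredo.lxy$pts) <- old_crs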

## The nearest neighbor lookup table is no longer valid and will need to
## be recreated. Likewise with the ptsh table (proportion of
## time-selected hulls vs. s).
## Set these to NULL
fredo.lxy$nn <- NULL
fredo.lxy$ptsh <- NULL

## Lastly, we need to recreate the movement parameters.
fredo.lxy <- lxy.repair(fredo.lxy)

## Should be done. Inspect results
summary(fredo.lxy)
plot(fredo.lxy)






On 5/15/2014 2:14 PM, Anna Schweiger wrote:
> Hi Andy and Wayne
>
> Thanks a lot for your answers and advice. I learned a lot!
>
> I’ll probably try both options for parameter selection and let you
> know if I notice any interesting effects. So far I tried the “same
> method” option (i.e., I chose individual s values that select 60%
> time-selected hulls for each ID) and noticed that the optimal a values
> (according to lhs.plot.isoarea, lhs.plot.isoear, and plotting the
> hulls) were really quite similar, so it was reasonable to choose one a
> value for all my IDs.
>
> Thanks also for the code for offsetting duplicate points! However,
> when I used it for an object with several ids, the code found more
> duplicate locations than I did with the summary function:
>> summary(all_gams.lxy)
> Summary of LoCoH-xy object: all_gams.lxy
> ***Locations
>        id num.pts dups
>       528     252    0
>       611     223    0
>       655      90    0
>       643     235    0
>    620_11     243    1
>    620_12     231    1
>
>> xy  <- coordinates(all_gams.lxy$pts)
>> dup_idx <- duplicated(xy)
>> table(dup_idx)
> dup_idx
> FALSE  TRUE
>  1269     5
>
> I couldn’t yet figure out why this is the case, so I chose to offset
> the points for 620_11 and 620_12 separately by using subsets: test1
> <- lxy.subset(all_gams.lxy, id="620_12"). In the subsets,
> duplicated(xy) found only one duplicate per id (as in summary). Then,
> however, another issue came up when I tried to merge the objects back
> together with lxy.merge(all_gams2.lxy, test1.lxy). I got the following
> error:
> Error: identicalCRS(dots) is not TRUE
> Seems like some problem with the coordinate system that was not fixed
> by lxy.repair? I worked my way around it by recreating the lxy
> object, using the following instead of lxy.repair:
> test1.lxy <- xyt.lxy(xy=xy, dt=pts_df$dt, id=pts_df$id,
> proj4string=CRS ("+proj=somerc +lat_0=46.95240555555556
> +lon_0=7.439583333333333 +x_0=600000 +y_0=200000 +ellps=bessel
> +units=m +no_defs"),req.id=F, show.dup.dt=TRUE)
> That’s now fine for me. Still, it would interest me why
> duplicated(xy) finds different cases depending on whether I use an
> lxy object with several ids or an lxy object with just one id. I
> will try to find it out…
>
> Regarding your idea about constructing hulls around duplicate locations
> (e.g. a circle indicating a water hole), I think it might be quite a
> challenge to correctly identify “real” duplicates due to GPS accuracy.
> In my study area (mountains) I tested the accuracy of the GPS devices
> and found errors of about 8 m in open terrain and around 15 m in the
> forest. So, if I have 2 points with the same xy coordinates I cannot
> tell whether the animal was actually in the same location; likewise,
> if the xy points are only separated by a few meters I cannot rule out
> the possibility that the animal was in the same location. Buffering
> would also not help in that case. Maybe with additional physiological
> data (e.g. low heart rate when sleeping) it would be possible to find
> the “real” duplicated points. Well, I guess my thoughts are trivial,
> but I’m curious what might be possible in the future.
>
> Many greetings, Anna
