From niklasdornbusch at web.de Thu Mar 1 12:05:24 2018
From: niklasdornbusch at web.de (Dirkules)
Date: Thu, 1 Mar 2018 04:05:24 -0700 (MST)
Subject: [datatable-help] computation of variance covariance matrix for weighted least squares
Message-ID: <1519902324390-0.post@n4.nabble.com>

Hi,

I am trying to compute the variance-covariance matrix in weighted least squares. Using the lm function gives me a different result than when I compute it myself. Any idea what I am doing incorrectly? My calculation of the beta coefficients seems correct, though. Here is my code:

library('foreign')
download.file('http://fmwww.bc.edu/ec-p/data/wooldridge/smoke.dta','smoke.dta',mode='wb')
smoke <- read.dta('smoke.dta')

######################################################################################
# computation of my weighting matrix "hhat"
lm.8.7 <- lm(cigs ~ lincome + lcigpric + educ + age + agesq + restaurn, data=smoke)
lres.u <- log(lm.8.7$residuals^2)
lm.8.7gls.u <- lm(lres.u ~ lincome + lcigpric + educ + age + agesq + restaurn, data=smoke)
hhat <- exp(lm.8.7gls.u$fitted.values)

# estimating with weights hhat
lm.8.7gls <- lm(cigs ~ lincome + lcigpric + educ + age + agesq + restaurn, weights=1/hhat, data=smoke)

########################################################################################
# Doing the estimation myself
u <- model.frame(cigs ~ lincome + lcigpric + educ + age + agesq + restaurn, data=smoke)
x <- model.matrix(u, smoke)
y <- model.response(u)
wg <- 1/hhat
hh <- diag(wg)
beta <- solve(t(x) %*% hh %*% x) %*% (t(x) %*% hh %*% y)  # estimate of beta
dSigmaSq <- sum((y - x%*%beta)^2)/(nrow(x)-ncol(x))       # estimate of sigma-squared
vcm <- dSigmaSq * solve(t(x) %*% hh %*% x)                # compute variance covariance matrix

t(beta)
lm.8.7gls$coefficients
vcm
vcov(lm.8.7gls)

Many thanks!
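A minimal sketch on simulated data of what is likely going wrong (all variable names below are illustrative, not from the post): in weighted least squares the error variance must be estimated from the *weighted* residual sum of squares, sum(w * e^2) / (n - p), whereas the code above uses the unweighted sum. With that one change the hand-rolled variance-covariance matrix matches vcov() on the weighted lm fit:

```r
# WLS sketch: weighted sigma^2 estimate, assuming known weights w.
set.seed(1)
n <- 200
x <- cbind(1, rnorm(n))                     # design matrix with intercept
w <- runif(n, 0.5, 2)                       # known weights
y <- as.vector(x %*% c(2, 3)) + rnorm(n, sd = 1 / sqrt(w))

fit <- lm(y ~ x[, 2], weights = w)          # reference fit

W <- diag(w)
beta <- solve(t(x) %*% W %*% x) %*% (t(x) %*% W %*% y)
e <- y - x %*% beta
sigma2 <- sum(w * e^2) / (n - ncol(x))      # weighted RSS, not sum(e^2)/(n - p)
vcm <- sigma2 * solve(t(x) %*% W %*% x)

all.equal(unname(vcm), unname(vcov(fit)))   # TRUE
```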
-- Sent from: http://r.789695.n4.nabble.com/datatable-help-f2315188.html

From aroos at terpmail.umd.edu Sat Mar 3 05:19:36 2018
From: aroos at terpmail.umd.edu (bigappleanalyst)
Date: Fri, 2 Mar 2018 21:19:36 -0700 (MST)
Subject: [datatable-help] Vectors & Ifelse Logic - Won't Populate Vector
Message-ID: <1520050776022-0.post@n4.nabble.com>

I'm creating a function that takes two inputs: delta and length. The function modifies an existing vector - let's call this vector base - by delta for the first 'length' elements. I need to create a new vector, let's call it 'adjusted'. This vector needs to populate based on one condition: if the index value for the base vector is less than the length parameter specified in the function input, then I want to increase the base vector by delta for all elements with index less than length. All elements in the base vector at indices greater than the specified length parameter are to remain the same as in the base vector.

Here is an example of my code. I have tried to implement the logic in two ways: 1. using an ifelse, because it allows for vectorized output; 2. using if/else logic.

adjusted <- vector()  # initialize the vector

# Create a function.
Adjustment <- function(delta, length) {
  ifelse((index < length), (adjusted <- base + delta), (adjusted <- base))
  head(adjusted)
}

# Method 2.
Adjustment <- function(delta, length) {
  if (index < length) {
    adjusted <- base + delta
  } else {
    adjusted <- base
  }
  head(adjusted)
}

Where I am experiencing trouble is that my newly created vector, adjusted, will not populate. I'm not quite sure what is happening - I am very new to R and any insight would be much appreciated.
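A hedged sketch of the likely fix (the names base_vec and len are stand-ins, not from the post): assignments inside a function body are local, so the global adjusted is never touched, and ifelse() should be used for its return value rather than for side-effect assignments in its branches. Returning the vectorized result and assigning it at the call site populates the vector:

```r
base_vec <- 1:10   # stand-in for the poster's "base" vector

adjustment <- function(base_vec, delta, len) {
  idx <- seq_along(base_vec)
  # ifelse is vectorized: elements with index < len are bumped by delta,
  # the rest are left unchanged; the result is *returned*, not assigned.
  ifelse(idx < len, base_vec + delta, base_vec)
}

adjusted <- adjustment(base_vec, delta = 100, len = 4)
adjusted   # 101 102 103 4 5 6 7 8 9 10
```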
-- Sent from: http://r.789695.n4.nabble.com/datatable-help-f2315188.html

From aroos at terpmail.umd.edu Sun Mar 4 01:06:47 2018
From: aroos at terpmail.umd.edu (bigappleanalyst)
Date: Sat, 3 Mar 2018 17:06:47 -0700 (MST)
Subject: [datatable-help] How to implement function across groups and save unique output
Message-ID: <1520122007031-0.post@n4.nabble.com>

I have a flat file that I imported into R. The file is 10,000 rows. In the file there is a date variable, and a group variable. There are 100 groups, so the group variable shows a value of 1 for rows 1:100, a value of 2 for rows 101:200, and so on for all 100 groups. The date variable has 100 unique entries, and repeats/restarts for each group. Thus, the dates in rows 1:100 are identical to the dates in rows 101:200, 201:300, and so on.

I wrote a function that manipulates a vector in the flat file, vector_1. As it stands, it manipulates vector_1 across all 10,000 rows. Vector_1 has unique entries in rows 1:100, 101:200, 201:300, and so on. Thus, I would like to implement a parameter so the function only calculates across the specified range. If I input group = 1, then the function will only perform the calculation on vector_1 for rows 1:100.

Here is my function:

adjusted <- c()
Adjustment <- function(delta, length) {
  adjusted <<- vector_1 + delta*(index <= length)
  head(adjusted)
}

Here is a hard-coded example of what I would like to achieve:

adjusted <- c()
Adjustment <- function(delta, length, group = 1) {
  adjusted <<- vector_1[1:100] + delta*(index <= length)
  head(adjusted)
}

I would like to implement a parameter or a loop that performs the calculation over the corresponding range of vector_1 for the inputted group parameter value. For example, if I instead entered group = 2 the function would look like:

adjusted <- c()
Adjustment <- function(delta, length, group = 2) {
  adjusted <<- vector_1[101:200] + delta*(index <= length)
  head(adjusted)
}

How would I achieve this?
How do I manipulate the original group, date, and vector_1 vectors to allow for this modular calculation? Finally, I would like the function to store a unique "adjusted" vector for all 100 groups. For example, adjusted_1, adjusted_2, and so on... I found that the dplyr package may be useful for this, but I haven't been successful in implementing it. Any insight would be much appreciated! -- Sent from: http://r.789695.n4.nabble.com/datatable-help-f2315188.html From by.hook.or at gmail.com Sun Mar 4 01:51:33 2018 From: by.hook.or at gmail.com (Frank Erickson) Date: Sat, 3 Mar 2018 19:51:33 -0500 Subject: [datatable-help] How to implement function across groups and save unique output In-Reply-To: <1520122007031-0.post@n4.nabble.com> References: <1520122007031-0.post@n4.nabble.com> Message-ID: Hi, This is a mailing list related to the data.table package, not general R help. See https://www.r-project.org/help.html If you're interested in dplyr, its forum might be a good place to learn: https://community.rstudio.com/t/faq-whats-a-reproducible-example-reprex-and-how-do-i-do-one/5219 Best, Frank On Sat, Mar 3, 2018 at 7:06 PM, bigappleanalyst wrote: > I have a flat file that I imported into R. The file is 10,000 rows. In the > file there is a date variable, and a group variable. There are 100 groups, > so the group variable shows a value of 1 for rows 1:100, a value of 2 for > rows 101:200, and so on for all 100 groups. The date variable has 100 > unique > entries, and repeats/restarts for each group. Thus, the dates in rows 1:100 > are identical to the dates in rows 101:200, 201:300, and so on. > > I wrote a function that manipulates a vector in the flat file, vector_1. As > it stands, it manipulates vector_1 across all 10,000 rows. Vector_1 has > unique entries in rows 1:100, 101:200, 201:300, and so on. Thus, I would > like to implement a parameter so the function only calculates across the > specified range. 
If I input group = 1, then the function will only perform > the calculation on vector_1 for rows 1:100. > > Here is my function: > > adjusted <- c() > Adjustment <- function(delta, length) { > adjusted <<- vector_1 + delta*(index <= length) > head(adjusted) > } > > Here is a hard-coded example of what I would like to achieve: > > adjusted <- c() > Adjustment <- function(delta, length, group = 1) { > adjusted <<- vector_1[1:100] + delta*(index <= length) > head(adjusted) > } > > I would like to implement a parameter or a loop that performs the > calculation over the corresponding range of vector_1 for the inputted group > parameter value. For example, if I instead entered group =2 the function > would look like: > > adjusted <- c() > Adjustment <- function(delta, length, group = 2) { > adjusted <<- vector_1[101:200] + delta*(index <= length) > head(adjusted) > } > > How would I achieve this? How do I manipulate the original group, date, and > vector_1 vectors to allow for this modular calculation? > > Finally, I would like the function to store a unique "adjusted" vector for > all 100 groups. For example, adjusted_1, adjusted_2, and so on... > > I found that the dplyr package may be useful for this, but I haven't been > successful in implementing it. > > Any insight would be much appreciated! > > > > -- > Sent from: http://r.789695.n4.nabble.com/datatable-help-f2315188.html > _______________________________________________ > datatable-help mailing list > datatable-help at lists.r-forge.r-project.org > https://lists.r-forge.r-project.org/cgi-bin/mailman/ > listinfo/datatable-help > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From tosaar at outlook.com Wed Mar 7 07:55:50 2018
From: tosaar at outlook.com (Saar)
Date: Tue, 6 Mar 2018 23:55:50 -0700 (MST)
Subject: [datatable-help] problems with a new R installation
Message-ID: <1520405750567-0.post@n4.nabble.com>

Hi, I hope this is the right place. I have recently installed R, and it seems that anything that has to do with packages does not work for me. I'm getting an error message whenever I try to do something that involves Internet servers. This is more or less the message:

starting httpd help server ...
Error in startDynamicHelp(TRUE) : internet routines cannot be loaded
In addition: Warning message:
In startDynamicHelp(TRUE) :
  unable to load shared object 'D:/PROGRA~1/R/R-34~1.3/modules/i386/internet.dll':
  LoadLibrary failure: The specified procedure could not be found.

This does not allow me to install any packages and use R effectively, so if anyone has an idea how to resolve this, or where the right place to address this is, sharing that knowledge with me will be thanked and appreciated :-)

Saar

-- Sent from: http://r.789695.n4.nabble.com/datatable-help-f2315188.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From matthew.forrest at senckenberg.de Wed Mar 7 11:04:25 2018
From: matthew.forrest at senckenberg.de (Matthew Forrest)
Date: Wed, 7 Mar 2018 11:04:25 +0100
Subject: [datatable-help] aggregate data.table by group by reference
Message-ID: 

Hi all,

A real data.table question here ;-)

I am currently doing this:

output.dt <- input.dt[, lapply(.SD, method.function), by=by.dims]

where method.function is an arbitrary function and by.dims is a list of columns (although perhaps I could remove this if the data.table was keyed by those columns?)

That doesn't look very data.table-y to me; potentially it would be great to do it with := in the proper data.table way. Is that possible? I have tried a couple of things but haven't managed.
The problem is that I need to be flexible to deal with arbitrary columns in my data.table.

Any suggestions?

Many thanks,

Matt

-- 
Dr Matthew Forrest
Senckenberg Biodiversity and Climate Research Centre (BiK-F)
Visiting address: Georg-Voigt-Straße 14-16, room 3.04, Frankfurt am Main
Postal address: Senckenberganlage 25
D-60325 Frankfurt am Main
Tel.: +49-69-7542-1867
Fax: +49-69-7542-7904
E-mail: matthew.forrest at senckenberg.de
Homepage: www.bik-f.de

Senckenberg Gesellschaft für Naturforschung
Rechtsfähiger Verein gemäß § 22 BGB
Senckenberganlage 25
60325 Frankfurt

Direktorium: Prof. Dr. Dr. h.c. Volker Mosbrugger, Prof. Dr. Andreas Mulch, Stephanie Schwedhelm, Prof. Dr. Katrin Böhning-Gaese, Prof. Dr. Uwe Fritz, PD Dr. Ingrid Kröncke

Präsidentin: Dr. h.c. Beate Heraeus

Aufsichtsbehörde: Magistrat der Stadt Frankfurt am Main (Ordnungsamt)

From kpm.nachtmann at gmail.com Wed Mar 7 16:38:20 2018
From: kpm.nachtmann at gmail.com (Gerhard Nachtmann)
Date: Wed, 7 Mar 2018 16:38:20 +0100
Subject: [datatable-help] aggregate data.table by group by reference
In-Reply-To: 
References: 
Message-ID: 

Hi!

You didn't provide a reproducible example (reprex), so I created one for you, including the solution. Say you want to add the median values of each column of the iris dataset by species:

library(data.table)
iris <- as.data.table(iris)
cols <- setdiff(names(iris), "Species")
iris[iris[, lapply(.SD, median), keyby = "Species"],
     paste0(cols, ".med") := mget(paste0("i.", cols)),
     on = "Species"]

Cheers,
Gerhard

2018-03-07 11:04 GMT+01:00 Matthew Forrest :
> Hi all,
>
> A real data.table question here ;-)
>
> I am currently doing this:
>
> output.dt <- input.dt[, lapply(.SD, method.function), by=by.dims]
>
> where method.function is an arbitrary function and by.dims is a list of
> columns (although perhaps I could remove this if the data.table was keyed by
> those columns?)
>
> That doesn't look very data.table-y to me, potentially it would be great to
> do it with := in the proper data.table way. Is that possible?
I have tried
> a couple of things but haven't managed. The problem is that I need to be
> flexible to deal with arbitrary columns in my data.table.
>
> Any suggestions?
>
> Many thanks,
>
> Matt
>
> -- 
> Dr Matthew Forrest
> Senckenberg Biodiversity and Climate Research Centre (BiK-F)
> Visiting address: Georg-Voigt-Straße 14-16, room 3.04, Frankfurt am Main
> Postal address: Senckenberganlage 25
> D-60325 Frankfurt am Main
> Tel.: +49-69-7542-1867
> Fax: +49-69-7542-7904
> E-mail: matthew.forrest at senckenberg.de
> Homepage: www.bik-f.de
>
> Senckenberg Gesellschaft für Naturforschung
> Rechtsfähiger Verein gemäß § 22 BGB
> Senckenberganlage 25
> 60325 Frankfurt
>
> Direktorium: Prof. Dr. Dr. h.c. Volker Mosbrugger, Prof. Dr. Andreas Mulch,
> Stephanie Schwedhelm, Prof. Dr. Katrin Böhning-Gaese, Prof. Dr. Uwe Fritz,
> PD Dr. Ingrid Kröncke
>
> Präsidentin: Dr. h.c. Beate Heraeus
>
> Aufsichtsbehörde: Magistrat der Stadt Frankfurt am Main (Ordnungsamt)
>
> _______________________________________________
> datatable-help mailing list
> datatable-help at lists.r-forge.r-project.org
> https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/datatable-help

From jeiaiel at gmail.com Sat Mar 10 19:04:45 2018
From: jeiaiel at gmail.com (Susan Kim)
Date: Sat, 10 Mar 2018 11:04:45 -0700 (MST)
Subject: [datatable-help] What R packages do you find most useful in your daily work?
Message-ID: <1520705085288-0.post@n4.nabble.com>

Are there any R packages that are good to have and that you use frequently, across multiple areas, regardless of the type of work you are doing? If so, what are these packages?

-- Sent from: http://r.789695.n4.nabble.com/datatable-help-f2315188.html

From jonathanxk at 163.com Wed Mar 14 04:41:16 2018
From: jonathanxk at 163.com (xelnaga)
Date: Tue, 13 Mar 2018 20:41:16 -0700 (MST)
Subject: [datatable-help] plot the probability density function.
Message-ID: <1520998876879-0.post@n4.nabble.com>

How do I plot the probability density function of the gamma, exponential, and lognormal distributions if I have x representing the insurance claims of a portfolio (such as 24, 26, 73, 84, ...)? I really need help, thank you!

-- Sent from: http://r.789695.n4.nabble.com/datatable-help-f2315188.html

From akj2784 at gmail.com Fri Mar 23 13:03:54 2018
From: akj2784 at gmail.com (akj2784)
Date: Fri, 23 Mar 2018 05:03:54 -0700 (MST)
Subject: [datatable-help] Stacked Waterfall chart in R
Message-ID: <1521806634713-0.post@n4.nabble.com>

Hi All,

Is there any way we can create a stacked waterfall chart using R? I have seen various blogs on creating a simple waterfall chart, but could not find anything for a stacked waterfall, where I want to split each bar based on a sub-category. Any help would be really appreciated.

Regards,
Akash

-- Sent from: http://r.789695.n4.nabble.com/datatable-help-f2315188.html

From brettknoss at protonmail.com Fri Mar 23 20:36:54 2018
From: brettknoss at protonmail.com (brettknoss)
Date: Fri, 23 Mar 2018 12:36:54 -0700 (MST)
Subject: [datatable-help] entering a matrix from a spreadsheet into r
Message-ID: <1521833814011-0.post@n4.nabble.com>

I created a matrix in LibreCalc and saved it as a .csv and an .xlsx document. I read these tables in using read.csv and read.xlsx. But when I print out these matrices, the .csv has the columns properly labeled, with the rows labeled as numbers and my names under a column X_1. With the .xlsx document, everything is labeled correctly, but there is a row in grey. What do these mean, and does X__1 mean labels? I'd also like help turning my matrix into a social network, by defining vectors and a matrix argument.
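A small sketch of what is probably going on with the .csv (the file contents below are invented for illustration): read.csv treats the row labels as an ordinary data column, which is what shows up under the X/X_1 heading. Passing row.names = 1 assigns that column to the row names instead, and as.matrix() then yields a numeric matrix suitable for use as, say, an adjacency matrix:

```r
# Invented 3x3 adjacency matrix standing in for the spreadsheet contents.
csv_text <- "name,A,B,C
A,0,1,0
B,1,0,1
C,0,1,0"

# row.names = 1 moves the label column into the row names,
# so it no longer appears as a data column named X / X_1.
adj <- as.matrix(read.csv(text = csv_text, row.names = 1))

adj["A", "B"]   # 1
dim(adj)        # 3 3
```

From there, a package such as igraph can turn the matrix into a network, e.g. igraph::graph_from_adjacency_matrix(adj, mode = "undirected").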
-- Sent from: http://r.789695.n4.nabble.com/datatable-help-f2315188.html

From jason.fukunaga at gmail.com Sun Mar 25 08:57:06 2018
From: jason.fukunaga at gmail.com (porcupine_racer)
Date: Sat, 24 Mar 2018 23:57:06 -0700 (MST)
Subject: [datatable-help] Having a problem with plotting: Error in plot.window(...) : infinite axis extents [GEPretty(-inf, inf, 5)]
Message-ID: <1521961026240-0.post@n4.nabble.com>

I'm having some issues generating the plots for this code, and I'm not sure why. Also, the dates are showing 1970 instead of the correct 2015 year. Can anyone help, please? I'm an uber noob when it comes to all this coding :(

I keep getting this error for part of my code, along with "Solution may be nonunique" warnings for almost all of it, even though plots are produced:

Error in plot.window(...) : infinite axis extents [GEPretty(-inf,inf,5)]
In addition: Warning messages:
1: In rq.fit.br(x, y, tau = tau, ci = TRUE, ...) :
  Solution may be nonunique
2: In rq.fit.br(x, y, tau = tau, ci = TRUE, ...) :
  Solution may be nonunique
3: In rq.fit.br(x, y, tau = tau, ci = TRUE, ...) :
  Solution may be nonunique
4: In rq.fit.br(x, y, tau = tau, ci = TRUE, ...)
: Solution may be nonunique

Here's my code:

---
title: "Final_Project"
output:
  flexdashboard::flex_dashboard:
    orientation: columns
    vertical_layout: fill
runtime: shiny
---

```{r setup, include=FALSE}
library(ggplot2)
library(flexdashboard)
library(shiny)
library(QRM)
library(qrmdata)
library(xts)
library(zoo)
library(psych)
library(quadprog)
library(matrixStats)
library(quantreg)
library(moments)
library(plotly)
library(quantmod)
library(TTR)   # was library(TTS); ROC() comes from TTR

rm(list = ls())

start <- as.Date("2015-01-01")
end <- as.Date("2017-12-31")
tickers <- c("GOOG","AMZN","AAPL","MSFT","GOVT")
asset.price <- NULL
# download stock prices through a for loop
for(ticker in tickers)
  asset.price <- cbind(asset.price, getSymbols(ticker, from = start, to = end, auto.assign = F))
cls.price <- asset.price[, c(4, 10, 16, 22)]  # this will select only the closing price
head(cls.price)
RF <- asset.price[, 28]
rf.rate <- RF/365
rf.mean <- mean(rf.rate)
plot(cls.price)
colnames(cls.price) <- tickers[-5]
class(cls.price)
# method 1 of calculating return
asset.ret <- ROC(cls.price, type = "discrete")*100
class(asset.ret)
head(asset.ret)
asset.ret.df <- data.frame(na.omit(asset.ret))
class(asset.ret.df)
ccf.values <- ccf(asset.ret.df[,1], asset.ret.df[,2])
plot(ccf.values)
```

Sensitivity Analysis
=======================================================================

Row {.tabset .tabset-fade}
-----------------------------------------------------------------------

### GOOG-AMZN {data-height=650}

```{r}
data.r <- diff(log(as.matrix(cls.price)))*100
# Create size and direction
size <- na.omit(abs(data.r))  # size is indicator of volatility
colnames(size) <- paste(colnames(size), ".size", sep = "")  # Teetor
direction <- ifelse(data.r > 0, 1, ifelse(data.r < 0, -1, 0))  # another indicator of volatility
colnames(direction) <- paste(colnames(direction), ".dir", sep = "")
dates <- as.Date(index(data.r))
dates.chr <- as.character(index(data.r))
# str(dates.chr)
values <- cbind(data.r, size, direction)
data.df <-
  data.frame(dates = dates, returns = data.r, size = size, direction = direction)
data.xts <- na.omit(as.xts(values, dates))  # order.by=as.Date(dates, '%d/%m/%Y')
# str(data.xts)
returns <- data.xts
corr.rolling <- function(x) {
  dim <- ncol(x)
  corr.r <- cor(x)[lower.tri(diag(dim), diag = FALSE)]
  return(corr.r)
}
ALL.r <- data.xts[, 1:4]  # Only four series here
window <- 90  # reactive({input$window})
corr.returns <- rollapply(ALL.r, width = window, corr.rolling, align = "right", by.column = FALSE)
colnames(corr.returns) <- c('Google & Amazon', 'Google & Apple', 'Google & Microsoft',
                            'Amazon & Apple', 'Amazon & Microsoft', 'Apple & Microsoft')
corr.returns.df <- data.frame(Date = index(corr.returns),
                              GOOG.AMZN = corr.returns[, 1],
                              GOOG.AAPL = corr.returns[, 2],
                              GOOG.MSFT = corr.returns[, 3],
                              AMZN.AAPL = corr.returns[, 4],
                              AMZN.MSFT = corr.returns[, 5],
                              AAPL.MSFT = corr.returns[, 6])
# Market dependencies
R.corr <- apply.monthly(as.xts(ALL.r), FUN = cor)
R.vols <- apply.monthly(ALL.r, FUN = colSds)  # from matrixStats
# Form correlation matrix for one month
R.corr.1 <- matrix(R.corr[20, ], nrow = 4, ncol = 4, byrow = FALSE)
rownames(R.corr.1) <- colnames(ALL.r[, 1:4])
colnames(R.corr.1) <- rownames(R.corr.1)
# Correlation data
R.corr <- R.corr[, c(2, 3, 4, 7, 8, 12)]
colnames(R.corr) <- colnames(corr.returns)
colnames(R.vols) <- c("GOOG.vols", "AMZN.vols", "AAPL.vols", "MSFT.vols")
R.corr.vols <- na.omit(merge(R.corr, R.vols))
GOOG.vols <- as.numeric(R.corr.vols[, "GOOG.vols"])
AMZN.vols <- as.numeric(R.corr.vols[, "AMZN.vols"])
AAPL.vols <- as.numeric(R.corr.vols[, "AAPL.vols"])
MSFT.vols <- as.numeric(R.corr.vols[, "MSFT.vols"])
GOOG.AMZN.corrs <- R.corr.vols[, 1]
taus <- seq(0.05, 0.95, 0.05)
fit.rq.GOOG.AMZN <- rq(GOOG.AMZN.corrs ~ AMZN.vols, tau = taus)
fit.lm.GOOG.AMZN <- lm(GOOG.AMZN.corrs ~ AMZN.vols)
plot(summary(fit.rq.GOOG.AMZN), parm = "AMZN.vols",
     main = "Google-Amazon Correlation Sensitivity to Amazon Volatility")
```

### GOOG-AAPL

```{r}
GOOG.AAPL.corrs <-
R.corr.vols[, 2]
taus <- seq(0.05, 0.95, 0.05)
fit.rq.GOOG.AAPL <- rq(GOOG.AAPL.corrs ~ AAPL.vols, tau = taus)
fit.lm.GOOG.AAPL <- lm(GOOG.AAPL.corrs ~ AAPL.vols)
plot(summary(fit.rq.GOOG.AAPL), parm = "AAPL.vols",
     main = "Google-Apple Correlation Sensitivity to Apple Volatility")
```

### GOOG-MSFT

```{r}
GOOG.MSFT.corrs <- R.corr.vols[, 3]
taus <- seq(0.05, 0.95, 0.05)
fit.rq.GOOG.MSFT <- rq(GOOG.MSFT.corrs ~ MSFT.vols, tau = taus)
fit.lm.GOOG.MSFT <- lm(GOOG.MSFT.corrs ~ MSFT.vols)
plot(summary(fit.rq.GOOG.MSFT), parm = "MSFT.vols",
     main = "Google-Microsoft Correlation Sensitivity to Microsoft Volatility")
```

### AMZN-AAPL

```{r}
AMZN.AAPL.corrs <- R.corr.vols[, 4]
taus <- seq(0.05, 0.95, 0.05)
fit.rq.AMZN.AAPL <- rq(AMZN.AAPL.corrs ~ AAPL.vols, tau = taus)
fit.lm.AMZN.AAPL <- lm(AMZN.AAPL.corrs ~ AAPL.vols)
plot(summary(fit.rq.AMZN.AAPL), parm = "AAPL.vols",
     main = "Amazon-Apple Correlation Sensitivity to Apple Volatility")
```

### AMZN-MSFT

```{r}
AMZN.MSFT.corrs <- R.corr.vols[, 5]
taus <- seq(0.05, 0.95, 0.05)
fit.rq.AMZN.MSFT <- rq(AMZN.MSFT.corrs ~ MSFT.vols, tau = taus)
fit.lm.AMZN.MSFT <- lm(AMZN.MSFT.corrs ~ MSFT.vols)
plot(summary(fit.rq.AMZN.MSFT), parm = "MSFT.vols",
     main = "Amazon-Microsoft Correlation Sensitivity to Microsoft Volatility")
```

### AAPL-MSFT

```{r}
AAPL.MSFT.corrs <- R.corr.vols[, 6]  # was [, 5], which duplicated the Amazon-Microsoft column
taus <- seq(0.05, 0.95, 0.05)
fit.rq.AAPL.MSFT <- rq(AAPL.MSFT.corrs ~ MSFT.vols, tau = taus)
fit.lm.AAPL.MSFT <- lm(AAPL.MSFT.corrs ~ MSFT.vols)
plot(summary(fit.rq.AAPL.MSFT), parm = "MSFT.vols",
     main = "Apple-Microsoft Correlation Sensitivity to Microsoft Volatility")
```

Column {data-height=150}
-----------------------------------------------------------------------

### Explanation of Sensitivity Charts

Explain here

Thank you so much for any help you can give. I'm sorry if this isn't the right way to post a message.
Jason

-- Sent from: http://r.789695.n4.nabble.com/datatable-help-f2315188.html

From Stanton.Hudja at gmail.com Tue Mar 27 18:20:28 2018
From: Stanton.Hudja at gmail.com (Stanton Hudja)
Date: Tue, 27 Mar 2018 09:20:28 -0700 (MST)
Subject: [datatable-help] Analyzing reweighted survival data that is correlated
Message-ID: <1522167628406-0.post@n4.nabble.com>

I'm currently analyzing data from an experiment in which I have multiple observations from each subject. I have reweighted the data to better approximate the data-generating process, and I have created adjusted survival curves using the Kaplan-Meier estimator for a control and a treatment group, but I am unsure of the best way to test for significance. The best approach I have thought of so far is to take the means for each subject under the weighted KM and run hypothesis tests on those. The log-rank test would have an issue with the correlated data. The Cox PH model doesn't seem to satisfy proportional hazards, although I don't think the effect of the non-proportional hazards matters much with this data. I would appreciate any advice on any of these tests.

Thanks,
Stanton

-- Sent from: http://r.789695.n4.nabble.com/datatable-help-f2315188.html
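One common way to attack this, sketched below on simulated data (the data layout, frailty mechanism, and weights are all invented, and this is only one of several defensible choices): fit a marginal Cox model with the case weights and request a robust sandwich variance clustered on subject, so that the standard error of the treatment effect accounts for the repeated, correlated observations per subject. The survival package supports both the weights and the clustering directly:

```r
library(survival)

set.seed(42)
# Simulated long-format data: 50 subjects, 4 correlated observations each,
# with invented case weights standing in for the KM reweighting.
d <- do.call(rbind, lapply(1:50, function(id) {
  trt <- rbinom(1, 1, 0.5)
  frailty <- rgamma(1, shape = 2, rate = 2)   # induces within-subject correlation
  data.frame(id     = id,
             trt    = trt,
             time   = rexp(4, rate = frailty * ifelse(trt == 1, 0.8, 1.2)),
             status = 1,
             w      = runif(4, 0.5, 2))
}))

# cluster(id) requests a robust (sandwich) variance clustered on subject;
# newer survival versions also accept this as a cluster = id argument.
fit <- coxph(Surv(time, status) ~ trt + cluster(id), data = d, weights = w)
summary(fit)$coefficients   # the "robust se" column is the clustered one
```

If the quantity of interest really is the difference between the weighted KM curves themselves rather than a hazard ratio, a cluster bootstrap (resampling subjects and recomputing the weighted KM each time) is another assumption-light option.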