<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN">
<html><body>
<p> </p>
<p>Also, fread works by first memory-mapping the file. The first time it does this for a particular file is therefore slower (you may have noticed the longer pause before the percentage counter starts on a first read). The time to memory-map is reported when verbose=TRUE (but you need the formatting fix in v1.8.9 to see it; the formatted number is garbled in v1.8.8). If you repeat the same fread call it won't spend as long memory-mapping, since the file is already mapped, depending on whether you did anything else memory-intensive on that computer/server in the meantime.</p>
<p>I don't know whether base R's load() memory-maps, but if it doesn't it will need to read from disk each time. So to be strictly fair, the time to compare is a "cold" read after a reboot, and only the first run of fread. In practice, though, we often do read the same file several times, so fread benefits from this: the OS caches the file in RAM for you. It may do the same for load() anyway. It's all very OS- and usage-dependent, and may also depend on how your particular R environment was compiled.</p>
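<p>A rough way to see this warm-cache effect (a sketch only; "big.csv" is a placeholder, and for a true cold timing the first call must follow a reboot):</p>
<pre>
library(data.table)
# First read after a reboot: includes the cost of reading/mapping the
# file from disk ("cold")
system.time(DT <- fread("big.csv"))
# Same call again: the OS has likely cached the file's pages in RAM,
# so this run is usually faster ("warm")
system.time(DT <- fread("big.csv"))
</pre>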
<p>I don't think a fresh R session is enough to reproduce this effect. You need a reboot, as it's the OS that caches/maps the file, not R/data.table.</p>
<p>So in short: reporting the very fast warm time along with the time to memory-map the file from cold would be the fairest and most complete comparison.</p>
<p>Matthew</p>
<p> </p>
<p>On 11.03.2013 14:51, Matthew Dowle wrote:</p>
<blockquote type="cite" style="padding-left:5px; border-left:#1010ff 2px solid; margin-left:5px; width:100%">
<p> </p>
<p>Exactly, RAM would always be quicker. But maybe you want to read data from an on-disk data.table using data.table syntax, rather than from some other database or flat text file; i.e., the on-disk data.table would not need to fit in RAM.</p>
<p>The benchmark sounds intriguing. Please share it if you can. compress=TRUE is the default for save(), though, so maybe the decompression takes the time.</p>
<p> </p>
<p>On 11.03.2013 14:12, stat quant wrote:</p>
<blockquote style="padding-left: 5px; border-left: #1010ff 2px solid; margin-left: 5px; width: 100%;">
<div>
<div>Filled as #2605</div>
<div>About your ultimate goal... why would you want on-disk tables rather than RAM (apart from being able to read a file larger than RAM)? Wouldn't RAM always be quicker?</div>
<div>I think data.table::fread is priceless because it is way faster than any other read function.</div>
<div>I just benchmarked fread reading a csv file against R loading its own .RData binary format, and, shockingly, fread is much faster!</div>
<div>I think it is too bad R doesn't provide a very fast way of loading objects saved from a previous R session (well, why don't I do it if it is so easy...)</div>
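<p>The benchmark was along these lines (a sketch, not the exact script; the file names are placeholders):</p>
<pre>
library(data.table)
DT <- as.data.table(matrix(runif(1e6), ncol = 10))
write.csv(DT, "big.csv", row.names = FALSE)
save(DT, file = "big.RData")     # compress=TRUE by default
system.time(fread("big.csv"))    # csv via fread
system.time(load("big.RData"))   # R's own binary format via load()
</pre>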
<div> </div>
</div>
<div><br /><br /> </div>
<div class="gmail_quote">2013/3/11 stat quant <span><<a href="mailto:mail.statquant@gmail.com">mail.statquant@gmail.com</a>></span><br />
<blockquote class="gmail_quote" style="margin: 0px 0px 0px 0.8ex; padding-left: 1ex; border-left-color: #cccccc; border-left-width: 1px; border-left-style: solid;">
<div>On my way to fill it in.</div>
<div> </div>
<div>About your ultimate goal... why would you want on-disk tables rather than RAM (apart from being able to read a file larger than RAM)? Wouldn't RAM always be quicker?</div>
<div> </div>
<div>I think data.table::fread is priceless because it is way faster than any other read function.</div>
<div>I just benchmarked fread reading a csv file against R loading its own .RData binary format, and, shockingly, fread is much faster!</div>
<div>I think it is too bad R doesn't provide a very fast way of loading objects saved from a previous R session (well, why don't I do it if it is so easy...)</div>
<div class="HOEnZb">
<div class="h5">
<div><br /><br /> </div>
<div class="gmail_quote">2013/3/11 Matthew Dowle <span><<a href="mailto:mdowle@mdowle.plus.com">mdowle@mdowle.plus.com</a>></span><br />
<blockquote class="gmail_quote" style="margin: 0px 0px 0px 0.8ex; padding-left: 1ex; border-left-color: #cccccc; border-left-width: 1px; border-left-style: solid;"><span style="text-decoration: underline;"></span>
<div>
<p> </p>
<p>Good idea statquant, please file it then. How about something more general e.g.</p>
<p> fread(input, chunk.nrows=10000, chunk.filter =)</p>
<p>That could be grep() or any expression of column names. It wouldn't be efficient to call it for every row one by one, and it similarly couldn't be called on the whole DT, since the point is that DT is larger than RAM. So some batch size needs to be defined, hence chunk.nrows=10000. The filter would then be applied to each chunk, and any rows passing it would make it into the final table.</p>
<p>read.ffdf has something like this, I believe, and Jens already suggested it when I ran the timings in example(fread) past him. We should probably follow his lead on argument names etc.</p>
<p>Perhaps chunk should be defined in terms of RAM, e.g. chunk=100MB, since that is how it needs to be handled internally: as a number of pages to map. Or maybe both, so that either nrows or MB would be acceptable.</p>
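<p>To make the idea concrete, something like chunk.filter can already be emulated with today's fread using skip/nrows (a sketch only, not the proposed API; the file name and filter are placeholders, and edge cases such as skipping past end-of-file are ignored):</p>
<pre>
library(data.table)
read_filtered <- function(file, chunk.nrows = 10000L,
                          keep = function(d) grepl("2013", d[[1L]])) {
  header <- names(fread(file, nrows = 0L))   # read just the header row
  chunks <- list()
  skip <- 1L                                 # line 1 is the header
  repeat {
    d <- fread(file, skip = skip, nrows = chunk.nrows,
               header = FALSE, col.names = header)
    if (nrow(d) == 0L) break
    chunks[[length(chunks) + 1L]] <- d[keep(d)]  # keep passing rows only
    skip <- skip + nrow(d)
  }
  rbindlist(chunks)                          # bind surviving rows
}
</pre>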
<p>Ultimately (maybe in 5 years!) we're heading towards fread reading into on-disk tables rather than RAM. Filtering in chunks will always be a good option to have even then, though, as you might want to filter what makes it into the on-disk table.</p>
<p>Matthew</p>
<div>
<div>
<p> </p>
<p>On 11.03.2013 12:53, MICHELE DE MEO wrote:</p>
<blockquote style="width: 100%; padding-left: 5px; margin-left: 5px; border-left-color: #1010ff; border-left-width: 2px; border-left-style: solid;">
<div dir="ltr">Very interesting request. I would also be interested in this possibility.
<div>Cheers</div>
<div class="gmail_extra"><br /><br />
<div class="gmail_quote">2013/3/11 stat quant <span><<a href="mailto:statquant@outlook.com">statquant@outlook.com</a>></span><br />
<blockquote class="gmail_quote" style="margin: 0px 0px 0px 0.8ex; padding-left: 1ex; border-left-color: #cccccc; border-left-width: 1px; border-left-style: solid;">
<div>Hello list,</div>
<div>We like fread because it is very fast, yet sometimes files are huge and R cannot handle that much data; some packages handle this limitation, but they do not provide a function similar to fread.</div>
<div>Yet sometimes only subsets of a file are really needed, subsets that could fit into RAM.</div>
<div> </div>
<div>So what about adding a grep option to fread that would load only the lines matching a regular expression?</div>
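<div>(In later data.table versions, fread gained a cmd= argument, so a shell grep can pre-filter the file; this sketch assumes a Unix-like system, and "big.csv" and the pattern are placeholders:)</div>
<pre>
library(data.table)
# grep drops the header line, hence header = FALSE
DT <- fread(cmd = "grep '^2013' big.csv", header = FALSE)
</pre>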
<div> </div>
<div>I'll add a request if you think the idea is worth implementing.</div>
<div> </div>
<div>Cheers</div>
<div> </div>
<br />_______________________________________________<br /> datatable-help mailing list<br /><a href="mailto:datatable-help@lists.r-forge.r-project.org">datatable-help@lists.r-forge.r-project.org</a><br /><a href="https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/datatable-help">https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/datatable-help</a></blockquote>
</div>
<br /><br clear="all" />-- <br />
<div><em><strong><span style="font-size: xx-small;">*************************************************************</span></strong></em></div>
<div><em><strong><span style="font-size: xx-small;">Michele De Meo, Ph.D</span></strong></em></div>
<div><em><span style="font-size: xx-small;"><span>Statistical and data mining solutions</span><br /><a style="color: #1c51a8;" href="http://micheledemeo.blogspot.com/">http://micheledemeo.blogspot.com/</a><br /> skype: demeo.michele</span></em></div>
</div>
</div>
</blockquote>
<p> </p>
<div> </div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
<p> </p>
<div> </div>
</blockquote>
<p> </p>
<div> </div>
</body></html>