[Rcicr-users] 2 questions

Dotsch, R. (Ron) R.Dotsch at uu.nl
Wed Nov 26 13:21:47 CET 2014


Hi Thomas,

> I generated stimuli for a judgment task at 512 x 512 and present the original and inverse side by side. What we noticed, though, is that the distance from the screen seems to matter when judging the pictures. I guess that is related to the literature on the relation between spatial frequencies and facial features. But my question is more practical: did you ever look at this systematically, and is there an ideal size-to-distance relation at which the generated pictures should be judged? My feeling is that 512 x 512 is too large.
> 
Yes, this is driven by the perception of spatial frequencies varying with distance to the screen, but it is also a function of screen resolution and screen size. On big screens or at low resolution settings, it makes sense to revert to 256x256 or even 128x128. As long as your base image has the size you want the final images to have and you set the appropriate parameters when generating the stimuli at that size, rcicr should not have any trouble with it.
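To make the spatial-frequency point concrete: the frequency content of the noise, expressed in cycles per degree of visual angle, depends on the image's physical size on screen and the viewing distance, not just on its pixel dimensions. A small sketch (the image width and distances below are illustrative numbers, not recommendations):

```python
import math

def cycles_per_degree(cycles_in_image, image_width_cm, distance_cm):
    """Spatial frequency in cycles/degree for an image of a given physical
    width viewed from a given distance."""
    visual_angle_deg = 2 * math.degrees(math.atan(image_width_cm / (2 * distance_cm)))
    return cycles_in_image / visual_angle_deg

# The same 512 px image (up to ~256 cycles of noise), shown 15 cm wide:
near = cycles_per_degree(256, 15, 50)    # viewed from 50 cm
far = cycles_per_degree(256, 15, 100)    # viewed from 100 cm
print(near, far)  # doubling the distance roughly doubles cycles/degree
```

So the same stimulus carries different spatial-frequency content to the retina at different seating distances, which is why uncontrolled distance muddies interpretation of which spatial scale is diagnostic.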

In principle, you want to keep everything constant, including distance to the screen, which is why many researchers use chin rests. That gives you great control and optimal conditions. In my own work I have not always done this. It didn't affect the general results much, but you lose the ability to interpret which spatial scale is diagnostic.

> I should note that I collect data online via qualtrics, so there is variance in the display size. But I could adapt image size to display size if I knew the ideal size.

That’s difficult, because the ideal size is not just a specific number of pixels; it depends on the factors above. A 256x256 image at a 640x480 screen resolution will look as big as a 512x512 image at 1280x960, given the same display size and seating distance. In programs like Inquisit or PsychoPy you can use relative sizes (e.g., 33% of the screen), which effectively creates resolution-independent RC tasks.
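The relative-size idea can be sketched in a few lines (the 40% fraction below just reproduces the 256-on-640 example; it is not a recommended value):

```python
def image_px_for_fraction(screen_w_px, fraction):
    """Pixel width that makes the image occupy a fixed fraction of screen
    width, so apparent size is independent of resolution on a display of
    the same physical size."""
    return round(screen_w_px * fraction)

# A 256 px image on a 640-wide screen covers the same fraction of the
# display as a 512 px image on a 1280-wide screen:
assert 256 / 640 == 512 / 1280 == 0.4
print(image_px_for_fraction(640, 0.4))   # 256
print(image_px_for_fraction(1280, 0.4))  # 512
```

In an online study you could apply the same logic with the browser-reported window width, though physical display size and seating distance would still remain uncontrolled.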

> 
> the other, more concrete question: when I generate CIs with the R scripts, I get output images like myPic_subject_1_autoscaled.jpg for each subject and mnes_trait_myTrait_autoscaled.jpg, but I also get ci_myPic_subject_1.jpg for every single subject, and these all look the same and seem to be the base image. Is that correct? Or should they actually show different outputs? I am confused by having so many identical pictures.

The autoscaled images are those where the noise has been scaled by a constant inferred from all the generated CIs relative to the base image. This maximizes the contribution of the noise to the classification image without transforming the noise non-linearly across images. The unscaled ones are still base image + noise. If they all look the same, it means the base image has a dynamic range of pixel luminance values far greater than that of the noise. In most cases this means you can simply ignore the unscaled images. Maybe I should not have the function output them, but they do prompt you to ask this question and learn how the noise is scaled, which is very important. So I think I might leave them in, or add an argument to the function that explicitly suppresses that output (not as the default, but as an opt-out).
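A minimal sketch of the shared-constant idea (this is only an illustration of the principle in Python, not rcicr's actual implementation, which is in R; the array sizes and noise level are arbitrary):

```python
import numpy as np

def autoscale(base, noises):
    """Scale every CI's noise by one constant shared across all CIs, so the
    transformation stays linear and images remain comparable."""
    constant = max(np.abs(n).max() for n in noises)  # inferred from all CIs
    # One shared factor maps all noise into [-0.5, 0.5] before adding it:
    return [np.clip(base + n / (2 * constant), 0.0, 1.0) for n in noises]

rng = np.random.default_rng(0)
base = np.full((4, 4), 0.5)                     # mid-grey base in [0, 1]
noises = [rng.normal(0, 0.01, (4, 4)) for _ in range(3)]
scaled = autoscale(base, noises)
unscaled = [base + n for n in noises]           # barely differs from base
print(np.ptp(scaled[0]) > np.ptp(unscaled[0]))  # True: far more contrast
```

Because the base image's luminance range dwarfs the raw noise, the unscaled images look like near-copies of the base; the shared constant boosts the noise's contribution while keeping the scaling identical across every CI.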

Best,

Ron


________________________________
Dr. Ron Dotsch

Utrecht University
Social and Organizational Psychology (Room E2.22)

Website: http://ron.dotsch.org

