DMDX Help.


Remote Testing Overview.


    I used to get asked from time to time if DMDX could be used over the web and the answer was always pfft.  However, nowadays remote testing is big stuff and the ways one can use DMDX remotely have proliferated.  Basically there are three methods these days and the only one you're really going to be interested in is the third one.  The first relies on SMTP ports (email) being open on the wider internet (not the case), so remote testing using my emailer is only relevant if you want to test on your own local network where you know the ports are open.  The second method requires an email account on campus here at the University of Arizona, and I'm guessing that's not happening for anyone other than us.  The last method, using an HTTP POST to get the data back, is however applicable to wider testing by bodies other than us.  That said, the background to what's going on and the considerations you need to be aware of are still covered in the descriptions of the first two methods, so until I get around to rewriting them you're going to want to at least read them before really concentrating on the last one.


Remote Testing Using SMTP (email).

    Originally I came up with a mechanism for a study here where subjects could not be expected to come in to the lab.  Basically what I did was zip up a DMDX executable, an item file, a program of mine to send email and a batch file to run the whole shebang into a self-extracting and executing archive.  Subjects only have to be able to run a program from a URL, where the only choices they can make are basically to do it or not.

    You need to have some way of making a self-extracting zip file execute a batch file; appropriating a self installer works well enough -- I used WinZip with its WinZip Self-Extractor package.  Not freeware, but cheap enough.  You can't be expecting miracles in accurate timing either, as we have to use DMDX's EZ mode where there's no synchronization with the raster (although see the later -auto option).  However I'm betting +/- a tick is going to still be a whole lot better than some custom bit of javascript running in a browser.  Not to mention a whole lot easier to use, as you can use all of DMDX's capabilities instead of having to write new javascript for every single thing...  You'll also need an SMTP server (although that might not even be the case any more, see the HTTP POST method below) that doesn't require all sorts of authentication, unless you're happy sticking passwords into batch files (which I really don't recommend).  I recently found SMTP2GO which offers these kinds of services, so you might investigate them.

    So first off, the script.  WinZip will have created a temporary directory and extracted all the files there; it'll be the current directory, so as long as there's no path information on anything DMDX should be able to find images and so on.  The only thing that's different here is using the desktop's video mode with <vm desktop>, but that's normal for EZ mode.  The emailer I'm using is my custom code and it's not super (it certainly won't deal with SSL email server connections), but there are a number of other programs out there if you need them.  If you want to use mine it's on the DMDX page in an example of the remote testing I'm describing, http://www.u.arizona.edu/~jforster/dmdx/commstest.exe (you'll have to pull it out of the .EXE); it's also in DMDXUTILS.ZIP.  Then the batch file to run them:

start /wait "DMDX" dmdx.exe -ez -buffer 2 -run eztest.rtf
start /wait "sending results" sendemail.exe -hsmtpserver.yourdomain.org tester@yourdomain.org "ez testing" eztest.azk


    The first line runs DMDX in EZ mode and waits for it to finish.  It runs it with a limited number of video buffers (because who knows how wretched the destination machine is) and tells it to run our item file, in this case eztest.rtf.  Once DMDX has finished the batch file runs the emailer and tells it to send eztest.azk which DMDX will have left behind after running eztest.rtf.  If you're using my emailer you'll have to tell it the name of your SMTP machine with -h because the UofA's server sure ain't gonna accept connections that aren't SSL from anywhere off campus.  Once the emailer is done WinZip deletes the files and temporary directory it put the files in and about the only thing left will be a few registry keys.  It won't need to execute as an administrator.  For testing purposes you can stick a pause command at the end of the batch file if things aren't working and you need to find out what's up.  As the batch file is paused you can go look in C:\Documents and Settings\Username\Local Settings\Temp\WZXXXX.XX and see the files before they get whacked.
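    If you're rolling your own emailer instead of using sendemail.exe, the job it does is small enough to sketch.  The following Python sketch is an illustrative stand-in for sendemail.exe, not its actual source; the host and address names are placeholders.  It builds a message whose body is the contents of the .AZK file and hands it to a plain, unauthenticated SMTP server:

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

def build_results_message(azk_path, sender, recipient, subject):
    """Build an email whose body is the raw contents of the .AZK results file."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(Path(azk_path).read_text(encoding="utf-8", errors="replace"))
    return msg

def send_results(azk_path, host, sender, recipient, subject, port=25):
    """Hand the message to a plain (no SSL, no authentication) SMTP server."""
    msg = build_results_message(azk_path, sender, recipient, subject)
    with smtplib.SMTP(host, port, timeout=30) as smtp:
        smtp.send_message(msg)

# e.g. send_results("eztest.azk", "smtpserver.yourdomain.org",
#                   "subject@testing.pc", "tester@yourdomain.org", "ez testing")
```

Note this has the same limitation as my emailer: a server demanding SSL or authentication will refuse it.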

    You'll want to drag all your files to WinZip to make the initial .ZIP file; images and the files I've mentioned all go in.  I guess if you really cared you could use a subdirectory for images and sound files, but I'm not.  Next you'll tell WinZip to make a self-extracting archive out of it, and once you've bought the Self-Extractor extension you can tell it to make an archive for software installation.  I included an optional message shown to users when the extractor is first extracting, telling them that DMDX is sensitive to other applications popping up windows and asking them to log out of IM sessions and to otherwise disable anything that might pop up a window as DMDX is running.  When it asks for the name of the command to run you tell it the name of your batch file, here eztest.bat, but you'll want to put a .\ in front of it as they recommend (so .\eztest.bat).  And then a few more prompts and you'll have your .EXE that you can stick on a web page and tell users to point their browsers at.  They'll have to actually run the thing and answer all the security nags, but it doesn't have to run as administrator (for Vista, should anyone be using it) and should be fairly straightforward.  Hopefully you get an email with the subject "ez testing" with the .AZK for its body.

    An extension I recently made to the batch file sends the diagnostics if the run failed, using a couple of IF EXIST statements.  Makes it much easier to figure out what went wrong if things fail:

start /wait "DMDX" dmdx.exe -ez -buffer 2 -run eztest.rtf
if exist eztest.azk start /wait "sending results" sendemail.exe -hsmtpserver.yourdomain.org tester@yourdomain.org "ez testing" eztest.azk
if not exist eztest.azk start /wait "sending diagnostics" sendemail.exe -hsmtpserver.yourdomain.org tester@yourdomain.org "ez diagnostics" diagnostics.txt

    And then there's the ultimate emailer script that actually tries different ports if one is blocked (now that sendemail has been expanded with a -p switch for the port number):

start /wait "DMDX" dmdx.exe -ez -buffers 2 -run eztest.rtf
if not exist eztest.azk goto diags
sendemail.exe -hpsy1.psych.arizona.edu jforster@psy1.psych.arizona.edu "ez testing results" eztest.azk
if errorlevel 1 sendemail.exe -p2525 -hpsy1.psych.arizona.edu jforster@psy1.psych.arizona.edu "ez testing 2525 results" eztest.azk
goto end
:diags
sendemail.exe -hpsy1.psych.arizona.edu jforster@psy1.psych.arizona.edu "ez testing diagnostics" diagnostics.txt
if errorlevel 1 sendemail.exe -p2525 -hpsy1.psych.arizona.edu jforster@psy1.psych.arizona.edu "ez testing 2525 diagnostics" diagnostics.txt
:end
 


Wider Internet Testing using an HTTP POST.

    So with another study here needing to run on the wider Internet, as opposed to just across campus as the earlier study had, I set out to test how widely blocked alternative mail ports are across the globe.  Turns out they're widely blocked, which pretty much rules out using SMTP (email) across anything other than a relatively controlled network.  While I could have tried lots of different ports and maybe I would have found one that hadn't ever been used for SMTP before, I suspect they all would have met with less than 100% success -- not to mention a lot of tester fatigue.  Instead I wrote a program to POST the results over HTTP on port 80, as if it were a browser filling in a form, that went to a script on one of my servers which then sent email on to the researchers.  Kind of roundabout, I admit; however it works, and to date the only problem with it revolves around personal firewalls needing to be told that the program posting the results should be allowed to do so.  Most users savvy enough to have a personal firewall are fairly used to this, and those that aren't savvy are used to just clicking OK anyway, so it's a moot problem.

    While the following discussion revolves around using our server there's nothing stopping someone using another server and writing their own form to capture POSTed data to a database or whatever, the poster.exe utility I wrote is eminently flexible.

    The larger issue for anyone else trying to do a study like this is the script on a server that sends the form results as an email.  While scripts that email things are fairly common, it is something that's going to require someone with significant technical chops to set up, and a server to run it on, and you can't use our server because our server can only send email to accounts on campus (although I might note that at the current time (01/07/12) it does indeed send mail off campus -- this however is not so much by design as by oversight I suspect, and could end at any time as off campus email is supposed to use SSL or TLS connections to the mail server and psy1 doesn't).  You'll also probably be limited to sending email to a local account unless you have an email server that takes unauthenticated requests.  Here it's not a problem as on campus machines trust each other; elsewhere this may not be the case.  Of course, you don't actually have to email the results, you could just write them to a file; I'm less interested in dealing with that level of maintenance however (well, until I developed the third method listed below that does exactly that).  The poster program I wrote is in the second communications test http://www.u.arizona.edu/~jforster/dmdx/commstest2.exe and is called poster.exe (it's available in DMDXUTILS.ZIP).  It takes a -h option for the host server to post to and a -p port option like the sendemail.exe program, then the first argument is the script name to post to (it uses HTTP 1.1 so multi-homed servers are fine) and the rest are form control names and their values, where if a value is the name of a file it will send the contents of the file instead of the value.
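    For the curious, the core of what poster.exe does can be sketched in a few lines of Python.  This is an illustrative reconstruction from the description above, not poster.exe's actual source -- in particular the form encoding poster.exe really uses isn't documented here, so the sketch just uses an ordinary urlencoded body:

```python
from pathlib import Path
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_post_body(controls):
    """Encode form controls; a control whose value names an existing file
    sends the file's contents instead of the value (as poster.exe does)."""
    data = {}
    for name, value in controls.items():
        p = Path(str(value))
        data[name] = p.read_text() if p.is_file() else value
    return urlencode(data).encode("utf-8")

def post_form(host, script, controls, port=80):
    """POST the controls to http://host:port/script as a browser form would."""
    req = Request(f"http://{host}:{port}{script}",
                  data=build_post_body(controls),
                  headers={"Content-Type": "application/x-www-form-urlencoded"})
    return urlopen(req, timeout=60)

# e.g. post_form("your.server.org", "/cgi-bin/bsdemailer",
#                {"email": "youremail", "subject": "poster testing",
#                 "results": "commstest2.azk"})
```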

start /wait "DMDX" dmdx.exe -ez -buffers 2 -run commstest2.rtf
if not exist commstest2.azk goto diags
poster.exe -hyour.server.org /cgi-bin/bsdemailer email=youremail subject=poster%%20testing -iemailaddr results=commstest2.azk
if errorlevel 1 pause
goto end
:diags
poster.exe -hyour.server.org /cgi-bin/bsdemailer email=youremail subject=DMDX%%20failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 pause
:end


The final Luxury Yacht solution.

    So with all the data coming in one email at a time, experimenters rapidly discovered that keeping all the data straight and concatenated into the correct .AZK file was actually quite a bit of work, prone to error to boot, and the call went out for something superior.  So I made a new CGI-BIN called UnloadAZK4web that takes the heart of UnloadAZK and buries it in a shell of my bsdemailer that stores the data on the server (http://psy1.psych.arizona.edu/DMDX/ or http://psy1.psych.arizona.edu/cgi-bin/unloadazk4web), which experimenters can then download with their web browsers.  As a backup it can email the data to an experimenter just in case the server tanks (or someone accidentally tells the server to nuke the data, see below).  This also spawned a request for a more rigorous timing method than DMDX's EZ mode, so a new auto mode was created that trusts the refresh rate the operating system says the display is running at, and if the OS doesn't say, it goes with 60 Hz.

    The problem here is that we have no control over names of the experiments and a name collision would have two experiments combining their data.  Probably not catastrophic as item numbers would in all probability be different however very messy to recover from.  So the new CGI generates an MD5 hash from the item numbers used in an experiment and appends that to the name of the item file.  Which is fine if your experiment always executes the same items every time it runs, however things like maze tasks (or my communications test) don't so an additional control can be used to override the data used to generate the hash (called hash of course).  We've been using the .RTF file for the hash so that any subtle changes in the item file not reflected in the name of the item file will generate separate data but you could also use any arbitrary string.  Indeed there's some argument for using arbitrary strings as people are finding the multiple new files spawned from trivial edits irritating.  You'd also have to use the hash control if your experiment produced a .ZIL file as I'm fairly sure UnloadAZK4web won't be pulling item numbers out of a .ZIL file (it should however concatenate the .ZIL data files well enough).  Which coincidentally exposed a bug in poster 1.1.0 where if you used two file controls the second one wouldn't get sent so one has to be careful to use poster 1.1.1 (or later, the current version is in DMDXUTILS.ZIP) if one is using a file for the hash control.  And then of course there's the determination of the item file's name, it's not actually transmitted (results=commstest2.azk means send the contents of the file commstest2.azk not the text) so DMDX 4.0.4.2 when invoked with -EZ or -auto spews the item file name in a comment in the .AZK (or .ZIL) and UnloadAZK4web looks for it.   
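    The hash-based naming above can be sketched as follows; the exact way UnloadAZK4web joins the name and the hash is an assumption here, but the MD5-of-the-hash-source idea is as described:

```python
import hashlib

def server_data_file_name(item_file_name, hash_source):
    """Append an MD5 hash of the hash source (the item numbers by default,
    or whatever the hash= control supplied, e.g. the .RTF file's contents)
    to the item file's name.  The joining format here is a guess."""
    digest = hashlib.md5(hash_source.encode("utf-8")).hexdigest()
    return f"{item_file_name}.{digest}"
```

The useful property is that two experiments with the same item file name but different items (or different hash controls) land in different data files, while every run of the same experiment lands in the same one.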
If you don't use DMDX 4.0.4.2 (or later) then UnloadAZK4web will use the subject as the first part of the file name (before the MD5 hash) and if you don't include a subject it will just use the hash for the name (meaning you'll have to guess which file on the server has your data).  I would also note that the version of DMDX in that package is pretty ancient and doesn't account for the new Direct3D renderer needed for Windows 8 and 10 and so on so if you use that package as a basis for your experiment you'll want to at least grab the latest version of DMDX, critically so if you intend to use tachistoscopic timing.  So the new script used for UnloadAZK4web testing is:  

start /wait "DMDX" dmdx.exe -auto -run commstest2.rtf
if not exist commstest2.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=myemailaddr subject=unloadazk4web%%20commstest2%%20testing -iemailaddr hash=commstest2.rtf results=commstest2.azk
if errorlevel 1 pause
goto end
:diags
poster.exe /cgi-bin/bsdemailer email=myemailaddr subject=unloadazk4web%%20commstest2%%20failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 pause
:end

    Note that it passes the name /cgi-bin/unloadazk4web when posting the data versus /cgi-bin/bsdemailer when emailing me a diagnostic of a failure (there's not much point sending diagnostics to UnloadAZK4web).  There are several errors that UnloadAZK4web can throw; it will prepend FAILURE: to the subject when it does throw one and will append a failure control at the end of the email with more detail.  Typically, unless the failure also has WARNING: after it, the data won't have been stored on the server.  For now UnloadAZK4web will pretty much append any text file regardless of whether it's an .AZK file or not (meaning you could in fact toss diagnostics at it, but then your subject count would be off and you'd have to cut the contents out before ANALYZE ever processed it, which kinda defeats the purpose of making a script to lower the amount of cutting and pasting an experimenter has to do).  If we see abuse of such glasnost then UnloadAZK4web will start rigorously parsing for .AZK (or .ZIL) components and reject the post if they're not found; as it is, UnloadAZK4web will purge data files older than 6 months, and the directory listing will warn that a file is about to be deleted once it's more than 5 months old.  Others without a UofA email address would use a script more along the lines of this (which won't attempt to send any email backup data -- although as noted earlier psy1 is indeed capable of sending email to the wider world right now (01/07/12), exactly how long that will last is anyone's guess however):

start /wait "DMDX" dmdx.exe -auto -run commstest2.rtf
if not exist commstest2.azk goto end
poster.exe /cgi-bin/unloadazk4web hash=commstest2.rtf results=commstest2.azk
:end
if errorlevel 1 pause

    And then there's the issue of testing.  Say you're testing the package and it works and it's sent data to psy1 and then you want to start collecting real data but there's this file on psy1 now that's got test data in it.  We've included the ability for you to nuke data in files you've caused to be on psy1 by allowing you to send a poster command to UnloadAZK4web that has the control delete instead of the control results.  You'll need to send it a sample .AZK file because that's probably the easiest way to get the item file name to UnloadAZK4web (you could send it in the subject if you weren't using the items in the .AZK for the hash and if you didn't have the item file in the directory you execute the poster command from).  So from a command prompt in a directory that has poster.exe and the item file and at least one .AZK in it this command line could be given to nuke the old data:

poster.exe /cgi-bin/unloadazk4web hash=commstest2.rtf delete=commstest2.azk


Reliability.

    After having the remote testing capabilities up for a while it was noticed that DNS was flaky for psy1.psych.arizona.edu, so if a subject's script was trying to post data to it and DNS happened to be down at that moment the data would be lost.  The quick fix was to use -h128.196.98.40 in the poster command lines so it no longer had to use DNS to resolve psy1.psych.arizona.edu to 128.196.98.40; the longer term fix was to update poster.exe to use this automatically and to also retry a number of other internet related functions.  The versions of poster.exe in the previous examples haven't been updated, so if you are going to build your own remote testing setup I recommend using the latest version (1.2.1 as of writing) in the DMDXUTILS.ZIP package, or the one in the reliability test itself (http://www.u.arizona.edu/~jforster/dmdx/reliabilitytest.exe).  Which, by the way, has a substantially nicer script from the user's perspective that fully breaks out failures and could even be expanded to attempt to educate the user on sending their data in manually if someone cared to (by either echoing the file to the screen and using the clipboard instructions already in the script, or by telling the user the location of the file and so on).  However I'm guessing such efforts aren't needed at the moment; so far we have 100% reliability from all corners of the web using 128.196.98.40 (as far as communications are concerned, people can still have machines that can't run DMDX).

start /wait "DMDX" dmdx.exe -auto -run reliabilitytest.rtf
if not exist reliabilitytest.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web%%20reliabilitytest%%20testing -iemailaddr hash=reliabilitytest results=reliabilitytest.azk
if errorlevel 1 goto error
goto success
:diags
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=unloadazk4web%%20reliabilitytest%%20failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 goto error
:success
echo off
echo .
echo .
echo .
echo The test was a success, thank you for helping improve DMDX.
goto end
:error
echo off
echo .
echo .
echo .
echo Alas the communications failed. Please copy the error messages above and
echo email them to some@email.address. (Clicking the C:\ icon in top
echo left corner and selecting Edit / Mark will allow you to highlight text
echo in this window with the cursor and Enter will copy it to the clipboard).
:end
echo .
pause

    And then while this DNS stuff was all going down, Thomas Schubert offered us the use of his server in Germany (scriptingrt.net) as a backup UnloadAZK4web server, and after a few server tweaks, a number of tweaks to UnloadAZK4web and a bit of new functionality it now runs on his server as well as psy1 (many thanks to Thomas).  This means that remote testing setups can either post their data to both servers or post to one if the other is failing.  The trouble with posting data to both servers is determining just what data went where and what's duplicated if one of the servers was down or unreachable for any number of subjects.  Given the recent improvements to posting data to psy1, where DNS failures no longer cause data loss, I'm recommending people post first to psy1 and only if it fails go on to post data to Thomas' server.  For people that host their experiments on the arizona.edu servers the likelihood of psy1 being down and those servers being up is even lower than just plain old internet failures, but it can still happen.  Among the differences between the servers is that scriptingrt requires the extension .cgi on its CGI files and it doesn't require them to live in a cgi-bin folder, so the URL for Thomas' server's UnloadAZK4web data file listing is http://scriptingrt.net/unloadazk4web.cgi.  Then there's the auxiliary decision of which server to post to first (assuming you're not going to post to both).  scriptingrt is in Germany, so if you're testing on that continent perhaps communications are less likely to fail to it.  I haven't noticed any routing flaps in the US for the last few years, so continental differences may be moot.
Still, you may decide to post first to scriptingrt after all, as it is not subject to the whims of campus sysadmins who may at a moment's notice decide they're fed up with allowing on campus SMTP connections to go through without authentication -- which is pretty much going to kill off off campus use of psy1 if people need the email acknowledgement that UnloadAZK4web sends out each time data is stored.  Then again, scriptingrt is subject to Thomas continuing to lease the server and pay for its domain name.  You also need to post to its DNS name instead of its IP (as the default psy1 post does these days) because it's likely to be a multi-homed server (many sites, one IP address).

     Of course, having put all the extensive retries into poster.exe, any failure to post to psy1 is going to take a good fraction of an hour to expire (one I tested today was over half an hour), so I have altered poster.exe again (now version 2.1.0) to allow specification of the number of retries to attempt (-r).  Here we can whip off a couple of quick attempts, first to psy1 and then to scriptingrt, and if either of them is up and running they'll succeed.  Then if they both failed, fall back on extensive retries to both servers and hopefully one of them comes up during the time it takes:

start /wait "DMDX" dmdx.exe -auto -run redunancytest.rtf
if not exist redunancytest.azk goto diags
poster.exe -r1 /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web%%20redunancytest%%20testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto fallback
goto success
:fallback
poster.exe -r1 -hscriptingrt.net /unloadazk4web.cgi email=some@email.address subject=unloadazk4web%%20redunancytest%%20testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto moreretries
goto success
:moreretries
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web%%20redunancytest%%20testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto moreretriesfallback
goto success
:moreretriesfallback
poster.exe -hscriptingrt.net /unloadazk4web.cgi email=some@email.address subject=unloadazk4web%%20redunancytest%%20testing -iemailaddr hash=redunancytest results=redunancytest.azk
if errorlevel 1 goto diags
goto success
:diags
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=unloadazk4web%%20redunancytest%%20failure -iemailaddr diagnostics=diagnostics.txt
if errorlevel 1 goto error
:success
echo off
echo .
echo .
echo .
echo The test was a success, thank you for helping improve DMDX.
goto end
:error
echo off
echo .
echo .
echo .
echo Alas the communications failed. Please copy the error messages above and
echo email them to some@email.address. (Clicking the C:\ icon in top
echo left corner and selecting Edit / Mark will allow you to highlight text
echo in this window with the cursor and Enter will copy it to the clipboard).
:end
echo .
pause
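    The quick-pass-then-patient-pass logic that batch file implements with -r1 can be sketched more abstractly; post_fn here is a hypothetical stand-in for invoking poster.exe against a given server:

```python
def post_with_fallback(post_fn, servers, quick_retries=1, full_retries=20):
    """Try each server with a quick attempt first; only if every server
    failed fall back to patient, extensively-retried attempts."""
    for retries in (quick_retries, full_retries):
        for server in servers:
            try:
                post_fn(server, retries)
                return server  # first server that accepted the data
            except OSError:
                continue  # this server failed; try the next
    raise RuntimeError("all servers failed on both passes")

# e.g. post_with_fallback(run_poster,
#                         ["psy1.psych.arizona.edu", "scriptingrt.net"])
```

The point of the shape is the same as the batch file's: a working server answers within seconds, and the long retry cycles only come into play when both servers look dead.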

 


Windows 8.

     And then Microsoft go and release Windows 8, which doesn't actually contain DirectDraw (which DMDX uses to manipulate the screen) but instead emulates it, so I had to go and craft version 5 of DMDX that has an optional Direct3D renderer in it.  People have two choices here: either use the new version 5 binaries and let DMDX choose which renderer it wants to use based on the OS it finds itself running on, or just force DMDX to use the Direct3D renderer with -d3d on the command line.  At this stage I'm fairly sure the second option is viable unless you're looking at testing on some very ancient hardware; I've set up an example using it that will spew diagnostics at me if it fails, but there's already been fairly widespread testing of this and no significant issues have arisen lately.  It also uses the relatively new <prose> and <instruction> keywords that make typing and displaying text more hospitable to different display dimensions and international keyboard differences.  If people go with the automatic route they can tell which renderer was used by looking at the video mode diagnostics, as when Direct3D is being used the code D3D will occur before the Video Mode text in the output file:

**********************************************************************
Subject 1, 06/03/2014 10:23:13 on 666-DEVEL, refresh 16.67ms
Item RT
! DMDX is running in auto mode (automatically determined raster sync)
! D3D Video Mode 1280,1024,24,60
! Item File <commstest4.rtf>
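    The automatic renderer choice can be sketched like this; the exact cut-off DMDX uses is an assumption on my part here, based on the fact that Windows 8 identifies itself as NT 6.2, the first version where DirectDraw is only emulated:

```python
def pick_renderer(windows_version, force_d3d=False):
    """Sketch of the version 5 renderer choice: -d3d forces Direct3D,
    otherwise pick by OS version (Windows 8 reports itself as NT 6.2;
    treating that as the cut-off is an assumption)."""
    if force_d3d or windows_version >= (6, 2):
        return "Direct3D"
    return "DirectDraw"
```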

     If people are interested in the diagnostic spew, here's the batch file that runs that test:

start /wait "DMDX" dmdx.exe -auto -d3d -ignoreunknownrtf -run commstest4.rtf
if not exist commstest4.azk goto diags
poster.exe /cgi-bin/unloadazk4web email=some@email.address subject=unloadazk4web%%20commstest4%%20testing -iemailaddr hash=commstest4.rtf results=commstest4.azk
goto end
:diags
if not exist diagnostics.txt goto systemdiags
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=poster%%20commstest4%%20failure -iemailaddr diagnostics=diagnostics.txt
goto end
:systemdiags
echo off
echo .
echo .
echo .
echo It would appear DMDX has failed to run at all. Please wait while we
echo gather some diagnostic information to help us improve DMDX. If you
echo don't wish to provide us with such information hit CONTROL-C now.
echo Otherwise hit space to continue...
pause
echo on
msinfo32 /report systemdiags.txt
cmd /a /c type systemdiags.txt>systemdiagsansi.txt
poster.exe /cgi-bin/bsdemailer email=some@email.address subject=poster%%20commstest4%%20system%%20failure -iemailaddr diagnostics=systemdiagsansi.txt
:end
if errorlevel 1 pause

 


Online subject recruitment.

    While tangential to DMDX itself, it's come up a few times so perhaps it bears mention here: Amazon's Mechanical Turk is a convenient way to recruit online subjects for studies.  Linguists have been using it for some time, so I include comments made by one of them who has run a few DMDX remote testing auditory tasks (task specific information has been removed or paraphrased in square brackets):

My MTurk experiences have been varied. I usually aim for twice as many participants as I want, because I need to exclude non-native English speakers, people who didn't do the task correctly, people who are trying to scam me, etc. One thing I was told is the data is cheap, so you should always pay for more than you need because only a portion of the data you'll get isn't garbage. But here's a few quick thoughts on what I understand of your situation:

Number of Participants: In the lab on campus, I'd expect I couldn't use 10% of subjects (but linguists are a little more picky than psychologists are). Online, I usually ask for twice as many as I need. I'd definitely ask for at least the maximum number of subjects given the power you want in the study.

Duration: My auditory experiments were about 10-15 minutes. I had attended a MTurk workshop and they suggested that was a good task length for people doing MTurk over their lunch breaks and such. I know people who have gotten away with 45 minute experiments though and many tasks are shorter (1-5 minute range).

Payment: I pay $1 for the 10-15 minute experiment. My colleague paid $3 for the 35-45 minute one. Paying too much will attract scammers and make people think there's something fishy about the assignment. Too little and no one will do it. I usually check to see if there are similar tasks online before I set a payment in case I need to do a little higher or could do lower. You can search for "psychology" and "experiment" or "survey" keywords to see your competition.

Quality Control: One important thing to do with MTurk is include a quality control measure. For behavioral experiments, I can look at reaction time or accuracy and discard bad subjects that way. People often add in a "trap" demographic question like "Answer 'yes' to this question" or something to ensure people aren't just zooming through the task. ... I'm also leery of putting a lot of restrictions on subjects, like "must be native English speaker, must be right handed, must be 18-32, etc." People might lie to say they're eligible for the study figuring you'll never know the truth, but if it doesn't affect their payment they'll answer demographic questions honestly (and then you can exclude the participants after you get their data).

Location and other restrictions: You can restrict the subjects' location by IP address, setting it to US only will ensure you're not getting everyone from India that you may or may not want. You can restrict things to a "master" status, which will get more trustworthy data (supposedly) but you might need to pay more to entice the turk experts to do the task. There are other restrictions you can use, like only people who have successfully completed 50 tasks, or something which might also get higher quality respondents.

Comments box: Subjects love the comment box, so make sure you include one. Also you might get some ideas on why something failed, what was confusing, difficult, etc.

Secret password: On the lines of quality control, many MTurk experiments are done on separate software (like DMDX). The easy way to interface between the Mturk and your software is to require subjects to either enter their MTurk worker ID into a box and/or a password that the experiment reveals at the end. So subjects need to [actively participate] all the way through, maybe the secret password could be added in to the end of the [task] somehow ... Some experiments use both.

Timing: I'm not sure if it's true, but I was told to put experiments online around 7am New York time. That way your task is "fresh" for the day. It may be outdated advice, but it would mean that as people wake up across the US the task will still be relatively new. For 15-min experiments, I could easily get 80 participants in 3-6 hours this way. My colleague who did the 45-min experiment got 64 participants in less than 24 hours.

Try a few out: My final suggestion would be to spend an hour on MTurk as a worker, search for surveys and experiments, and earn a buck or two. It lets you see how other labs are doing things and you can model your own HITs off theirs.

 


WinZip alternatives.

    And then, to add insult to injury, after buying the WinZip Self-Extractor package I found an article about a hidden built-in installer already in XP -- but it would appear to be hobbled by the fact that it's a 16 bit application, phew.  However there's also 7zip out there that would appear to have self-extracting capabilities; I looked it over, but without one or two other add-ons it doesn't look like it's easily usable (one not only needs it to auto extract files, one needs it to execute a command, ostensibly to start the installation but in our case to run the test).  More work is involved than it's worth to avoid paying WinZip for the software installation add-on (for us anyway).

 



DMDX Index.