In this homework assignment, you’re going to learn how to be a requester on Amazon Mechanical Turk. You should try to sign up as a requester as soon as possible, because it is a multi-step process and each step can take time. If you’re unable to sign up as a requester yourself, we recommend partnering with another student who has successfully signed up. You will work in pairs.
Once you are a requester, you will be able to post work on MTurk and to pay workers to complete your tasks. To complete the work outlined in this assignment, you need to pay about $26. If paying $26 of your own money presents a financial hardship to you, then please email your professor.
We will be asking workers to label images for us, similar to how Fei-Fei Li created ImageNet.
In this assignment, you will ask workers to judge whether or not an image depicts a wedding. These images will be representative of both Western and Indian cultures. You will create three sets of HITs. The first HIT set will be a small task with 17 sample images, intended as a tutorial on how to make a HIT. The other two HIT sets will use a larger dataset of wedding images: the second HIT set will be completed by workers located in India, and the third by workers located in the US. Your results from these two HIT sets will be used in the follow-on HW5 assignment, in which you will train one classifier solely on the India-based results and another solely on the US-based results, and observe the differences between the two.
Let’s get started by creating your first HIT set on MTurk. Again, this HIT set is intended as a tutorial on how to correctly make HITs on Amazon MTurk. After logging into your requester account, go to the Create tab and then click New Project. We’ll be labeling images, so you can start with the default Image Classification template (shown below).
Select Image Classification and then click on Create Project. You will see 3 tabs: Enter Properties, Design Layout, and Preview and Finish.
In the Enter Properties tab, you should change the following fields (you can leave the remaining fields unchanged):
In the Worker requirements section of the Enter Properties tab, add the following qualifications that workers have to meet in order to do the task:
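As an aside, the same worker requirements can be expressed programmatically if you ever post HITs through the MTurk API instead of the web UI. Below is a minimal boto3 sketch; the qualification type IDs are MTurk’s documented system qualifications, but the thresholds shown are illustrative rather than the exact ones required for this assignment.

```python
# Sketch: Locale and approval-rate worker requirements as boto3
# QualificationRequirements. Thresholds here are illustrative.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

locale_requirement = {
    "QualificationTypeId": "00000000000000000071",  # system Locale qualification
    "Comparator": "EqualTo",
    "LocaleValues": [{"Country": "IN"}],            # use "US" for the US-based batch
}
approval_requirement = {
    "QualificationTypeId": "000000000000000000L0",  # percent assignments approved
    "Comparator": "GreaterThanOrEqualTo",
    "IntegerValues": [95],                          # illustrative threshold
}

# These would be passed as QualificationRequirements=[locale_requirement,
# approval_requirement] to mturk.create_hit(...) when posting tasks via the API.
```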
In the Design Layout tab, edit the HTML for the categories, header, and name. Additionally, edit the short-instructions and add full-instructions. Below is a screenshot with an example of instructions. Workers will be able to read these instructions to better understand what to do in the task.
After making those edits, click on Save and then Preview to see what your HIT will look like to workers. Since you haven’t yet uploaded any information (like the URLs for the images that you want workers to judge), there will be a placeholder saying “Image will display here”.
If you’re satisfied with how the HIT looks, click the Finish button. You’ll then see your newly created task listed with a big orange “Publish Batch” button next to it. You can publish a batch by clicking that button and uploading a comma-separated values (CSV) file with the inputs to the HIT. The popup screen that appears will give you a link to download a sample .csv file showing which fields your HIT needs. For this HIT design, all we need is a single column with the header image_url, containing the image URLs that we want workers to judge. Here’s a small CSV file that you can use to test your HIT set. Note: this sample .csv file should be used only for this first tutorial HIT set. For your other two HIT sets, you will create a larger .csv file containing all of the wedding photos, which will have a different format than this sample, as explained in the instructions below.
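If you prefer to generate the input file with code instead of editing it by hand, a few lines of Python suffice. This is a minimal sketch: the URLs are placeholders for your actual image links, and the only hard requirement from the template is the image_url header.

```python
# Sketch: writing an MTurk input CSV with the single image_url column.
import csv

image_urls = [
    "https://example.com/images/wedding_01.jpg",  # placeholder
    "https://example.com/images/street_02.jpg",   # placeholder
]

with open("hit_input.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["image_url"])  # must match the ${image_url} variable in the template
    writer.writerows([url] for url in image_urls)
```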
After uploading your .csv file, you can preview the HITs with the data populating them. There’s a “Next HIT” button that will let you click through and preview multiple assignments. This is useful to confirm that your image links are all working properly.
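Clicking through previews is a good spot check; you can also verify every link in bulk before publishing. Here is a sketch using the requests library against the hit_input.csv file from the step above; note that a few image hosts reject HEAD requests, in which case you would fall back to GET.

```python
# Sketch: flag any unreachable image URLs in the input CSV before publishing.
import csv
import requests

with open("hit_input.csv", newline="") as f:
    for row in csv.DictReader(f):
        url = row["image_url"]
        try:
            status = requests.head(url, allow_redirects=True, timeout=10).status_code
        except requests.RequestException as exc:
            print(f"BROKEN {url} ({exc})")
            continue
        if status != 200:
            print(f"CHECK  {url} (HTTP {status})")
```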
If you’re happy with how the HITs look, click the big orange “Next” button at the bottom. You’ll then see a summary screen that gives details about the HIT including how much it will cost. Publish your task for Turkers to work on by clicking on the big orange “Publish” button.
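The cost on the summary screen follows from MTurk’s published pricing: a 20% fee on top of the worker reward, rising to 40% for HITs with 10 or more assignments. A quick sketch, using a hypothetical per-image reward, lets you estimate a batch’s cost before you reach that screen.

```python
# Sketch: estimating batch cost from MTurk's fee structure.
# 20% fee on the reward; 40% applies to HITs with 10+ assignments.
def batch_cost(num_hits, assignments_per_hit, reward, fee_rate=0.20):
    worker_pay = num_hits * assignments_per_hit * reward
    return worker_pay * (1 + fee_rate)

# e.g., the 17-image tutorial batch with 3 assignments per HIT
# at a hypothetical $0.05 reward:
print(f"${batch_cost(17, 3, 0.05):.2f}")  # -> $3.06
```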
Your HITs will be posted to MTurk. Once work has begun, you can monitor progress in the Manage tab. You’ll see a green progress bar showing how many of them have been completed.
You can see the individual responses by clicking on the “Review Results” link above the progress bar. On this screen you’ll see:
Notice that the three workers all said that the first URL did not show a wedding. This is the image that they said doesn’t show a wedding:
Three workers said that the second URL did show a wedding. This is the image that does show a wedding:
The Review Results screen will let you approve or reject the workers’ submissions. We recommend simply approving all of the assignments for this first tutorial HIT set.
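Approving through the Review Results screen is all you need for this assignment, but for completeness, the same operation is available through the API. A sketch using boto3, with a placeholder HIT ID and without pagination handling:

```python
# Sketch: approving every submitted assignment for one HIT via boto3.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")
HIT_ID = "YOUR_HIT_ID"  # placeholder

resp = mturk.list_assignments_for_hit(
    HITId=HIT_ID, AssignmentStatuses=["Submitted"]
)  # pagination via NextToken omitted for brevity
for assignment in resp["Assignments"]:
    mturk.approve_assignment(
        AssignmentId=assignment["AssignmentId"],
        RequesterFeedback="Thank you for your work!",
    )
```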
You can download all of these results as a .csv file. Here is the results file from when Professor Callison-Burch created a tutorial HIT set with the 17 sample images. Note that the .csv file has many additional fields that include information about your HITs, such as the properties that you specified and information about the amount of time that workers took to complete each assignment. Columns in the results .csv that start with “Input.” are the variables that were in the .csv file that you uploaded. Columns that start with “Answer.” are the answers that the Turkers provided.
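When you get to HW5 you will want one label per image rather than three separate judgments, and a majority vote over the “Answer.” column is the simplest aggregation. The sketch below uses pandas and assumes a column named Answer.category; check your own results file for the actual column name, which depends on how you named the field in the template.

```python
# Sketch: majority vote over the three worker judgments per image.
# "Answer.category" is an assumed column name; inspect your results CSV.
import pandas as pd

results = pd.read_csv("batch_results.csv")
majority = (
    results.groupby("Input.image_url")["Answer.category"]
    .agg(lambda labels: labels.mode().iloc[0])  # most frequent label wins
)
print(majority.head())
```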
This assignment is directly connected to HW5. In HW5, you will create classifiers trained on the images and labels you receive from your HITs. Since one classifier will be trained solely on the US-based results and the other solely on the India-based results, it is expected that the two models will differ. The goal of these two assignments is to highlight how human biases and beliefs ingrained in training data can shape the predictions of ML models, sometimes with unintended and negative consequences.
AI encodes and magnifies bias. Google researchers found that ImageNet and another popular dataset called Open Images “appear to exhibit an observable amerocentric and eurocentric representation bias,” as demonstrated by the distribution of geographically identifiable images in the datasets, with roughly two-thirds of the images coming from the Western world.
In addition, classifiers trained on these datasets show “strong differences in the relative performance on images from different locales,” with lower accuracy and confidence on images with people-related labels, like “bridegroom” and “police officer,” from countries like India and China. This research helped inspire the Inclusive Images Challenge, run by Google in 2018 in partnership with NeurIPS, a top deep learning conference.
A different large-scale crowdsourced dataset, The Massively Multilingual Image Dataset (MMID), was created by Penn researchers to learn English translations for words in 100 foreign languages by scraping images for each foreign word and finding the English words that had the most “similar” images.
MMID contains around 100 images for around 10,000 words in 100 foreign languages, providing an interesting source of data for improving the “geodiversity” of image classifiers. However, the images for an English translation of a foreign word can be noisy, as shown by crowdworkers who evaluated the relevance of images for a large subset of translations in 3 languages.
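To make the “similar images” idea concrete, here is a toy sketch of one way to score a candidate translation pair: represent each word by the feature vectors of its images, take the best cosine match for each of the foreign word’s images among the English word’s images, and average those best scores. The random vectors below are stand-ins for real CNN features, and this is only a sketch of one similarity measure, not a definitive account of MMID’s scoring.

```python
# Toy sketch of an image-set similarity score in the spirit of MMID:
# average, over the foreign word's images, of the best cosine match
# among the English word's images. Random vectors stand in for CNN features.
import numpy as np

rng = np.random.default_rng(0)
foreign_feats = rng.normal(size=(100, 512))  # 100 images x 512-dim features
english_feats = rng.normal(size=(100, 512))

def avg_max_similarity(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    sims = a @ b.T                  # pairwise cosine similarities
    return sims.max(axis=1).mean()  # best match per image in a, averaged

print(avg_max_similarity(foreign_feats, english_feats))
```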
Run !unzip "weddings-indian-languages.zip" in a new Colab cell to extract the dataset. The dataset is composed of around 200-1000 images per language for 8 languages spoken in India (Bengali, Gujarati, Hindi, Malayalam, Marathi, Punjabi, Tamil, and Telugu), taken from MMID. Repeat with the “Weddings European Language” dataset. You will submit the URL of this Colab notebook on Gradescope.
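After unzipping, a quick sanity check is to count how many images you have per language. The sketch below assumes, hypothetically, that the archive unpacks into one subdirectory per language; adjust the paths to whatever structure you actually see after extraction.

```python
# Sketch: counting images per language after extraction.
# Assumes one subdirectory per language, which may not match the
# archive's real layout; adapt as needed.
from pathlib import Path

root = Path("weddings-indian-languages")  # assumed extraction directory
for lang_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    n = sum(1 for f in lang_dir.rglob("*")
            if f.suffix.lower() in {".jpg", ".jpeg", ".png"})
    print(f"{lang_dir.name}: {n} images")
```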
Below are the questions that you will be asked to answer about this assignment. Please turn in your answers in a PDF for Homework 4 on Gradescope.