Introduce Yourself:
Hi! My name is Mikal Hayden-Gates and I am a rising senior at New Mission High School in Hyde Park. I am 16 years old and look forward to turning 17 in August. By participating in Project Success I hope to gain lab experience and really act on my interests in science. I plan on majoring in Biology, specifically Bioinformatics, in college. In my spare time I like to hang out with friends and listen to music. Some of my favorite bands are Bastille, The 1975 and Arctic Monkeys (weird name, I know!). Overall I'm really excited to be a part of Project Success this summer and look forward to working in a lab.

Interesting Fact About Reading:
Visual search is an actual field of study, which is interesting enough for me! When I hear "vision research" I automatically think of eye doctors and the simple tests you did at the hospital to make sure you could see. My reading was a general summary of visual search tasks and how the data collected from them is analyzed.

Introduce Your Lab:
This summer I will be working in Dr. Jeremy Wolfe's Visual Attention Lab in Cambridge. The lab mainly focuses on visual attention and the different factors that can influence it.

Introduce Your Project:
My project mainly focuses on how the movement of objects in a 3D scene affects the efficiency of a visual search. I created 3D shapes in Blender and began making scenes in the computer program Unity.

Weekly Assignment 1 (July 3):
Hypothesis: the movement of objects in a 3D scene can affect how efficient a visual search for a target object is. An efficient search can be defined as easily directing attention to the target object versus looking through each object.
a. I will create 3D shapes in Blender
b. I will create an interactive visual search experiment using Unity in which test subjects will look for a target object amongst other moving distractor objects.
c. I will run the interactive visual search on different test subjects.
d. I will analyze the data using Microsoft Excel and R.

Weekly Assignment 2 (July 11):
a. Creating 3D shapes in Blender
Blender is a 3D modeling program that supports the entire 3D pipeline: modeling, rigging, simulation, rendering and animation. The modeling and rendering features of Blender were used to build the 3D figures that were exported into Unity, another computer program. The pre-made shapes in Blender (torus, sphere, cylinder, cone and cube) were used as "building blocks" to create a total of twelve 3D shapes and one target object. After creating the 3D shapes, each shape was copied and colored blue, green, orange, pink, purple, red and teal. Lastly, each shape was saved as a ".blend" file and imported into the Assets folder in Unity.
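To give a sense of the bookkeeping this creates, here is a small Python sketch of the shape × color naming scheme. The shape names and file-naming pattern are hypothetical placeholders; the actual copies were made by hand in Blender, not by a script.

```python
# Sketch of the shape/color combinations exported from Blender.
# Shape names and the filename pattern here are made up for illustration.
SHAPES = [f"shape{i:02d}" for i in range(1, 13)] + ["target"]
COLORS = ["blue", "green", "orange", "pink", "purple", "red", "teal"]

def blend_filenames(shapes, colors):
    """Return one .blend filename for every shape/color copy."""
    return [f"{shape}_{color}.blend" for shape in shapes for color in colors]

files = blend_filenames(SHAPES, COLORS)
print(len(files))  # 13 shapes x 7 colors = 91 files
```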

b. Creating interactive visual search experiment using Unity 3D
Unity, unlike Blender, is a game engine that can be used to create interactive 2D and 3D content. The 3D shapes imported from Blender were set up on a flat plane and given a specific motion through a script. Each scene was either uniform or random. In a uniform scene, several copies of the same object moved in unison with each other, except for the target object, which moved differently. A random scene was similar: all of the objects were the same, but each one moved differently (the target being one of those objects).
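In Unity itself the motion script would be written in C#; as a language-neutral illustration, here is a Python sketch of the velocity-assignment logic behind the two scene types. The function name, velocity ranges, and seeding are my own assumptions, not taken from the actual experiment code.

```python
import random

def assign_velocities(n_objects, condition, seed=0):
    """Return one (vx, vy) velocity per object; index 0 is the target.

    'uniform': all distractors share one velocity; the target moves differently.
    'random' : every object, target included, gets its own velocity.
    """
    rng = random.Random(seed)

    def rand_v():
        return (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))

    if condition == "uniform":
        target, shared = rand_v(), rand_v()
        return [target] + [shared] * (n_objects - 1)
    if condition == "random":
        return [rand_v() for _ in range(n_objects)]
    raise ValueError(f"unknown condition: {condition}")

vels = assign_velocities(8, "uniform")
# Target (index 0) moves differently; all distractors share one velocity.
print(vels[0] != vels[1], len(set(vels[1:])))
```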
d. Data Analysis
After running the experiment on test subjects and collecting their data, I will analyze it using both R and Microsoft Excel, producing visual representations such as bar graphs and pie charts.
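As a rough illustration of the first step of that analysis (in practice this would be done in R or Excel, and the trial data below is made up), here is a Python sketch that averages reaction times by set size:

```python
from collections import defaultdict

# Hypothetical trial records: (set_size, reaction_time_ms)
trials = [(4, 520), (4, 480), (8, 610), (8, 650), (12, 760), (12, 740)]

def mean_rt_by_set_size(trials):
    """Group trials by set size and return {set_size: mean RT in ms}."""
    groups = defaultdict(list)
    for set_size, rt in trials:
        groups[set_size].append(rt)
    return {s: sum(rts) / len(rts) for s, rts in sorted(groups.items())}

print(mean_rt_by_set_size(trials))  # {4: 500.0, 8: 630.0, 12: 750.0}
```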

Weekly assignment 3 (July 18):
Introduction (Rough Draft):
People often perform simple visual search tasks daily without even noticing. Whether it is as simple as looking for keys in a cluttered room or targets in a video game, visual search is a common part of life. In addition to being a part of most people's daily routines, visual search can also be found in professional settings. Some search tasks serve socially important functions, such as a security guard looking for potentially dangerous items in CCTV images or a doctor looking for tumors in a mammogram [1]. Developing a systematic way to efficiently search for a particular item among distractors would help improve the efficiency of these socially important visual search tasks.
A standard visual search task consists of subjects looking for one or more targets amongst distractor items. The difference between targets and distractors can vary, making the task easier or harder. Targets that are obviously different from the distractors and seem to "pop out" of the display make the task easier, while targets that seem to blend in with the distractors make it harder, or less efficient. An efficient search can be defined as easily being able to direct attention to the target object rather than needing to look through each object in the set. The overall efficiency of the visual search task is measured by plotting reaction time (RT) against set size, the number of items in the display. Less efficient searches produce a steeper line because each additional item in the display adds more time to find the target; easier tasks produce a flatter line.
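The steepness of that RT × set size line can be computed with an ordinary least-squares fit. Here is a Python sketch using made-up mean RTs; the data and the rough "flat vs. steep" interpretation in the comments are illustrative, not results from the lab.

```python
def search_slope(mean_rts):
    """Least-squares slope of mean RT (ms) vs. set size: ms per item.

    A near-flat slope suggests an efficient ("pop-out") search;
    a steep slope suggests a less efficient, item-by-item search.
    """
    xs, ys = zip(*sorted(mean_rts.items()))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical mean RTs (ms) at three set sizes:
print(search_slope({4: 500.0, 8: 630.0, 12: 750.0}))  # 31.25 ms/item: a steep, inefficient search
```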
Scientists tend to test one or two important features at a time when studying the human visual system. These features can include movement, color, size and lighting. Although important discoveries about how the human visual system operates have come out of pinpointing different features of a visual search, lab-based displays do not always reflect the complexity of a real-life visual search that includes a majority of these features all at once [1].

1. Kunar, M. A., & Watson, D. G. (2014). When are abrupt onsets found efficiently in complex visual search? Evidence from multielement asynchronous dynamic search. Journal of Experimental Psychology: Human Perception and Performance, 40(1), 232–252. doi:10.1037/a0033544

*This paper addresses the different features of a visual search that can be manipulated. For example, in Kunar and Watson's experiments targets were either moving, static or blinking (sometimes a combination of all three).

Weekly Assignment 4 (July 25):
I haven't finalized my experiment yet, so I don't have any data to graph, but here is an example of what one might look like:
[Example graph: RTs.jpg]

Note from Melissa: Welcome to your Project Success homepage! We've matched you with Dr. Matthew Cain in the lab of Dr. Jeremy Wolfe at the Visual Attention Lab. Your reading, Chapter 7: Attention and Scene Perception from Sensation and Perception, by Wolfe et al., 2012, will be sent to you by e-mail.

We've also matched you up with another mentor outside of the lab to help support you through the summer and give you further exposure to science careers. Your mentor is Elizabeth Carpino.
