Technology-Based Learning with Disability

Research

Stephanie Auld

auld.2@wright.edu

Barker Code Investigation

Advisor: Dr. Julie Skipper

Stephanie received her B.S. in Biomedical Engineering at Wright State University in 2007. She currently serves as a member of both the Dean's Student Advisory Board and the BIE Department's Student Advisory Board. She is researching an innovative tactile-sensory color representation scheme for people with visual impairments called the Barker Code. She is also researching a bone mineral density measuring tool that would allow for inexpensive and widespread osteoporosis screening. Stephanie's PhD work will be in the field of biological and medical systems with a focus in medical imaging.

 

Research Synopsis:

    People with visual impairments or blindness experience many components of their lives differently than typically sighted people. However, sensory impairment need not be a limiting factor in the appreciation of the arts. Sally Barker has devoted considerable attention to creating a means through which people with visual impairments or blindness can experience fine art more directly. Her suggested approach uses a tactile color representation scheme she developed, known as the Barker Code (B-Code).

 

The B-Code is made up of three components. The first is a textile correlated to a specific color (e.g., silks and satins represent red). The second component defines the intensity of a given color: intensity is represented by varying levels of hardness, achieved through a range of battings placed behind the fabric, from cotton to cardboard to represent light to dark, respectively. The third component involves proper use through tactile sensation: fine art pieces reproduced with the appropriate textiles, as defined by the B-Code, are felt by the person with a visual impairment or blindness. The B-Code is intended to allow people with visual impairments or blindness to experience, through tactile sensation, feelings similar to those evoked in typically sighted people by the visualization of colors.
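
For illustration, the color and intensity components of the B-Code can be thought of as a lookup table. The Python sketch below shows one way such a table might be represented digitally; only the red/silks-and-satins pairing and the cotton-to-cardboard batting scale come from the description above, and the middle batting value is a hypothetical placeholder.

    # Minimal sketch of a digital B-Code lookup table. Only the
    # red -> silks/satins mapping and the cotton-to-cardboard batting
    # endpoints are taken from the description above; other entries
    # are defined by the B-Code itself and are not listed here.

    # Color component: each color corresponds to a textile.
    COLOR_TO_TEXTILE = {
        "red": ["silk", "satin"],
        # ... remaining colors defined by the B-Code, omitted here
    }

    # Intensity component: hardness of the batting behind the fabric,
    # ordered from light (soft) to dark (hard).
    INTENSITY_SCALE = ["cotton", "felt", "cardboard"]  # "felt" is hypothetical

    def encode(color: str, intensity: int) -> tuple:
        """Return a (textile, batting) pair for a color at a given
        intensity level (0 = lightest, len(INTENSITY_SCALE)-1 = darkest)."""
        textiles = COLOR_TO_TEXTILE[color]
        batting = INTENSITY_SCALE[intensity]
        return textiles[0], batting

    print(encode("red", 2))  # -> ('silk', 'cardboard'): a dark red patch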

 

The purpose of this research study is to obtain data that quantifiably characterize the quality of the subjects' experience with, and appreciation for, fine art pieces with and without the support of the B-Code. The interactions of the subjects and their companions with the art pieces and with each other, with and without the B-Code, will be videotaped and analyzed retrospectively.

 

Ronald Butcher

butcher.4@wright.edu

Advisor: Dr. S. Narayanan

Ron received his B.S. in Biomedical Engineering from Wright State University, Dayton, Ohio, in 2005. Ron has volunteered at the Shiloh House Adult Day Services to understand different disabilities and to work with individuals to help them learn and thrive. Ron's research investigates the way that older adults acquire knowledge and skills in the area of human-computer interaction. His hypothesis is that the use of specific games (both computer and non-computer games) can facilitate the learning process for the development of computing skills in senior citizens.

 

Jennifer Border

border.4@wright.edu

Alternative Computer Interface Workstation Design

Advisor: Dr. Wayne Shebilske

Jenny received her B.S. in Psychology in 2008 from Wright State University. She is currently working on her M.S. in Human Factors Psychology. Her research compares the typing efficiency of different assistive technology devices; specifically, she is looking at brain computer interface devices, eye gaze, and voice recognition.

 

Research Synopsis:   

The project involves finding an alternative computer interface and designing a workstation that incorporates the interface. The alternative interface is needed because a repetitive motion injury is developing from the current interface and could cause physical damage within the next two years if it is not resolved. Three different computer interfaces (Brainfingers, eye gaze, and voice recognition) will be thoroughly examined through different typing tasks and typing rates. The interfaces will be compared on numerous factors (efficiency, typing rate, learning curve, ease of access, social contexts, etc.). It is uncertain which interfaces will work best; it may be discovered that one interface works better for home use and another for traveling. In addition to finding an interface, a workstation will need to be designed that accommodates all of the physical constraints of the individual's body and repetitive motion injury.


I also worked on a project over the summer in the STREAMS program with Dr. John Flach on the brain computer interface Brainfingers, a brand name for a communication and control tool consisting of hardware and software that converts a user's neural responses into signals that control a mouse and a keyboard. In the project, Brainfingers was used to examine typing speeds on two different on-screen keyboards, WiViK and Gaze Talk, and the learning curve of each keyboard. WiViK is an on-screen keyboard that is identical to a standard keyboard except for an extra line of keys with predictive text. Gaze Talk is a predictive-text keyboard with 8 different boxes that contain the next predicted word or character. Brainfingers was used for cursor control on the keyboard: to move the cursor vertically, facial muscle signals from clenching my jaw were used, and horizontal movement was acquired through flicks in eye movement. There were no statistically significant changes in learning curve over a three-month period using either keyboard; however, there was a moderate change in typing speed using Gaze Talk.
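
As a rough illustration of the control scheme just described (not Brainfingers' actual proprietary interface), the sketch below maps a jaw-clench muscle signal to vertical cursor motion and lateral eye-movement flicks to horizontal motion; the signal inputs and thresholds are hypothetical stand-ins for the device's outputs.

    # Sketch of the cursor-control mapping described above; the signal
    # sources and thresholds are hypothetical, not the Brainfingers API.

    JAW_THRESHOLD = 0.5    # normalized EMG level treated as an intentional clench
    FLICK_THRESHOLD = 0.3  # normalized lateral eye-signal level treated as a flick
    SPEED = 5              # pixels moved per update

    def update_cursor(x, y, jaw_emg, eye_signal):
        """Advance the cursor one step.

        jaw_emg: 0..1 facial-muscle (jaw clench) amplitude -> vertical motion
        eye_signal: -1..1 lateral eye-movement signal -> horizontal motion
        """
        if jaw_emg > JAW_THRESHOLD:
            y -= SPEED  # clench moves the cursor up; release lets it drift down
        else:
            y += SPEED
        if abs(eye_signal) > FLICK_THRESHOLD:
            x += SPEED if eye_signal > 0 else -SPEED
        return x, y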

 

 

 

Jeannine Crum

crum.12@wright.edu

Advisor: Dr. F. Javier Alvarez-Leefmans

Jeannine Marie Crum is a student in the Biomedical Sciences PhD Program. Jeannine received her M.S. in Physiology and Neuroscience from Wright State University and her B.S. in Psychology from The Ohio State University. She volunteers to assist the Chair of both the Ohio Brain Injury Advisory Commission and Ohio Legal Rights Services with special projects. Jeannine's previous research includes spinal cord and peripheral nerve injury, and she intends to study neurogenic pain and motor disorders following spinal cord injury during her PhD.

 

 

Maurissa D'Angelo

dangelo.2@wright.edu

Analysis of Amputee Gait using Virtual Reality Rehabilitation Techniques

Advisors: Dr. S. Narayanan, Dr. David Reynolds

Maurissa received her B.S. in Biomedical Engineering from Case Western Reserve University, Cleveland, Ohio, in 2004, and her M.S. in Human Factors Engineering from Wright State University, Dayton, Ohio, in 2006. She is currently working on her dissertation, studying the effects of proprioception and virtual reality on rehabilitation. She volunteered at United Rehabilitation Services and currently volunteers at the Lifespan Health Research Center, where she works to help rehabilitate individuals with disabilities.

 

Research Synopsis:

     It is hypothesized that, through appropriate visualization methods, amputees will be able to ambulate more effectively and efficiently, with a more symmetrical gait achieved through improved stride length, more equal weight distribution between limbs, and a narrower, improved base of support. The purpose of this study is to evaluate this hypothesis and the effectiveness of virtual reality training for lower limb amputees after conventional amputee rehabilitation has been completed.

    This study will involve tracking an individual's movements and displaying the individual moving as an avatar in a VR scene. Capabilities such as body movements, kinematic data recordings, and adaptations to the user's environment will be employed. The first stage of this study will involve an amputee in a training scenario. The individual with a prosthesis (supported by a harness) will wear kinematic markers, including a head tracking device, and walk from one side of the room to the other along the harness rail gait line. This process will be repeated for a predetermined amount of time. The individual will then stop walking and use a head-mounted display (HMD) to watch a visual feedback recording of an avatar in his/her likeness performing the task he/she performed. The researcher/therapist will then discuss the individual's gait deviations with him/her as the individual watches the avatar walk back and forth. The individual will then walk from one side of the room to the other along the harness rail gait line, attempting to correct the gait deviations seen in the HMD as discussed with the therapist. This process will be repeated a predetermined number of times. Parameters studied will include symmetry of gait, pressure distribution, range of motion, and muscle strength.
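
One common way to quantify the gait symmetry mentioned above is a symmetry index computed from left- and right-side values of a parameter such as stride length or peak pressure. The sketch below shows that standard calculation; it is a general formula from the gait-analysis literature, not necessarily the exact metric this study will use, and the example values are hypothetical.

    def symmetry_index(left: float, right: float) -> float:
        """Symmetry index (%): 0 indicates perfect left/right symmetry.

        Computed as the left-right difference normalized by the mean of
        the two sides, a common formulation in gait analysis.
        """
        return 100.0 * (left - right) / (0.5 * (left + right))

    # Example: hypothetical stride lengths (m) for prosthetic vs. intact limb.
    print(symmetry_index(left=1.10, right=1.32))  # approx. -18.2% asymmetry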

     The needs of amputees have been studied, and a system incorporating chronic, repetitive exercises and functional real-world demands is currently being developed to test the effectiveness of a virtual reality rehabilitation system for amputees.

 

 

Mel Futrell

futrell.6@wright.edu

Advisor: Dr. John Flach

Mel received her B.A. in Music and English/American Literature from Washington University in St. Louis in 1993, and her M.A. in Communication Management from the University of Southern California in 1996. She worked for over a decade in the recording and concert industries of St. Louis, San Francisco and Los Angeles before returning to school in 2002 for a second Bachelor’s Degree in Psychology, conferred by California State University Northridge (CSUN) in 2004. She completed the coursework for her M.A. in Human Factors Psychology at CSUN, and has begun the PhD program at WSU while she finishes her thesis. As a pilot and instructor, Mel focuses her research on designing for a multicultural aviation population, which includes Deaf and hard-of-hearing pilots.

 

 

Allison Gadd

gadd.4@wright.edu

Pneumatic Muscle Augmentation in a Sit-to-Stand Device for the Elderly

Advisor: Dr. Chandler Phillips

Allison received a B.S. in Biomedical Engineering from Wright State University in 2004 and her M.S. in BME in 2007. She is currently working on her Ph.D. and helping to create a powered orthotic device to help strengthen the quadriceps muscles.


Research Synopsis:

     While in-house-built pneumatic muscles (PMs) have been researched and used in various assistive devices, commercially designed PMs have not. Previous research at Wright State University characterized the Festo™ PM, and current research combines that information with control methods so the PM responds quickly and safely, acting closer to what a human would need for augmentation. This assistance is needed because elderly individuals often do not have the ability to stand up under their own muscle strength. This inability often leads to falls and institutionalization; therefore, to keep the elderly more mobile and in their own homes longer, rehabilitation and possibly individual devices are needed to help with this issue.

 

     In a controlled set-up, this augmentation would be accomplished by determining what the individual can do on their own, and then controlling the PM's generated output so that the combined forces meet what is needed for standing. Initially, a device designed to help the elderly stand up would be used in a more controlled setting, such as physical therapy; however, since such a device would mainly keep those already institutionalized from losing more muscle strength, it would eventually be desirable to create personal, portable devices to help people stay functional longer in society and in their homes.
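
A minimal sketch of that augmentation logic: measure the force the individual can produce, compute the deficit relative to what standing requires, and command the PM to supply the remainder. The force values and the proportional pressure update below are illustrative assumptions, not the characterized Festo PM dynamics.

    # Illustrative sit-to-stand assist loop; the numbers and the pressure
    # law are assumptions for the sketch, not the Festo PM model.

    def required_assist(needed_force: float, user_force: float) -> float:
        """Force (N) the PM must add so the total meets the standing demand."""
        return max(0.0, needed_force - user_force)

    def update_pressure(pressure: float, commanded_force: float,
                        measured_force: float, gain: float = 0.01) -> float:
        """Simple proportional pressure correction toward the commanded force."""
        error = commanded_force - measured_force
        return max(0.0, pressure + gain * error)

    # Hypothetical example: standing demands ~600 N; the user supplies 400 N.
    assist = required_assist(needed_force=600.0, user_force=400.0)  # 200 N
    pressure = 2.0  # bar, hypothetical starting point
    pressure = update_pressure(pressure, assist, measured_force=150.0)
    print(assist, round(pressure, 2))  # 200.0 2.5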

 

Alyssa George

george.45@wright.edu

Advisor: Dr. Tarun Goswami

Alyssa earned a B.S. in Biomedical Engineering at Case Western Reserve University in 2008.  She is currently doing research through the Wright State University orthopaedic surgery department at Miami Valley Hospital.  This research studies mechanical failure modes of orthopaedic implants.  Her thesis work will focus on orthopaedic materials in the treatment of bone diseases such as osteoporosis.

 

 

Carissa Johnson

brunsman.3@wright.edu

Multi-Modal Adaptive User Interfaces to Support Web Based Information Seeking for the Blind

Advisor: Dr. S. Narayanan

Carissa received her B.S. in Computer Engineering from Ohio State University in 1993. She worked as a computer consultant and project manager for World Wide Web (WWW) and electronic commerce projects before returning to college to study human-computer interaction at Wright State University (WSU). Carissa graduated with an M.S. in Human Factors Engineering in 2006 and is currently working on her Ph.D. at WSU. Her research focus is on improving interaction with the WWW for users who are blind by using adaptive and multimodal technology.

 

Research Synopsis:

    This research investigates the issues involved when users who are blind seek information on the web, as well as methods to improve their navigating ability. Today's websites use a graphical user interface (GUI) design that is considered a monumental advance in interface design and is heavily visual in nature. Visual interaction is required for both inputs and outputs to the system. The World Wide Web (WWW) has brought the GUI to new levels by including streaming media, real-time collaboration, interactive documents, and pop-up windows to enhance the interactive visual experience. It has allowed users to quickly learn how to interact with the system by providing symbols and graphics to aid in navigation. Though the GUI aids usability for sighted users interacting with a system, it is detrimental to users who are blind. To date, the main methods for making the web accessible are to encourage designers to use the guidelines developed by the World Wide Web Consortium (W3C) and to use assistive technology such as a screen reader. The W3C's Web Content Accessibility Guidelines (WCAG) are designed to make websites accessible at the interface level, but they do not address the issues discovered during information seeking. In addition to the designer, several components are required to work together to make the web truly accessible.

 

    Browsing a website, for sighted users, tends to be based on trial-and-error experimentation and opportunistic navigating (Thatcher, 2008). Users learn to approach websites based on experience with previous websites and form strategies for approaching every new task based on these experiences. These strategies and tactics are well researched and documented (Marchionini, 1998; Thatcher, 2008; Spence, 1999). Researchers have coined their own terms for each strategy, but the terms are very similar in definition. Marchionini (1998) defines these browsing strategies as scanning, navigating, observing, and monitoring. These strategies differ by the specificity of the object defined in the user's mind and the specificity of the object defined by the website. All of these techniques involve using sight to determine the next step and to make decisions about location. There is very limited documentation describing the strategies that users who are blind employ to browse a website. This research is designed to observe participants who are blind conducting various typical search tasks on a website. These observations will be used to determine the strategies used by users who are blind and to compare them to those of sighted users. Approaches that may improve the strategies currently used will then be analyzed.

The research in this area will consist of the following processes:

  • Document a detailed description of the challenges encountered in information seeking by the blind through interviews and research.
  • Develop approaches using multimodal technologies.
  • Investigate model-based adaptive interfaces to integrate multimodal methods through research.
  • Design artifacts embodying the integrated methods defined in the previous task.
  • Evaluate the artifacts through experimentation.
  • Generate generic principles/methodologies as contributions to the field of user interface design.

Robert Keefer

keefer.2@wright.edu

Mobile Reader for the Visually Impaired

Advisor: Dr. Nikolaos Bourbakis

Rob earned a B.S. in Mathematics & Computer Science from Lawrence Technological University, and an M.S. in Computer Science from Wright State University. As a consultant, Rob has worked with start-up companies and Fortune 100 companies to develop software for a wide range of uses, from robots to Web sites. In 2007, Rob returned to Wright State to pursue a PhD in Computer Science. His research is focused on alternative reading devices for the blind.

 

Research Synopsis:

The current state of the art in document image processing has focused on solving problems that libraries and museums encounter in digitizing and preserving documents. Unfortunately, the commercially available systems developed to date are inconvenient for use in a mobile setting. This project will investigate document image processing algorithms that facilitate the assembly of a lightweight, portable, easy-to-use document reader for the visually impaired.

 

Another significant problem with such a device is its command and control by users with visual impairments. Thus, this project will also investigate the development of an interactive method for a user to capture a useful document image and interact with the document without being able to see it.

 

Common reader systems available today require the user to carry material to a scanner connected to a personal computer. Other systems, though portable, are large and require the user to carry a heavy piece of equipment to the library or wherever it is going to be used. For a mobile reader system to be useful, it must be small and portable, yet not draw attention to the user when in use.

  This project is targeted at creating a system composed of a small camera, a wireless headset, and a small handheld computer. An ideal system will have the camera sewn into a hat or mounted in a pair of glasses so that it can be easily directed toward the document to be read. A wireless headset will provide an inconspicuous method for listening to the synthesized speech and issuing voice commands. The primary processing of the system will be performed on a handheld computer that can be attached to the user's belt or placed in a purse.

 

Figure: an overview of the system: 1) the camera, 2) image processing, 3) text-to-speech, and 4) voice commands.

 

 A primary goal of this research is to discover and develop optimized algorithms that enable the document image processing, OCR, and speech processing components of this system (i.e., the “heavy lifting”) to be performed on a handheld device. To facilitate a positive user experience, a user should begin hearing the text read within 40 seconds of issuing a “Read” command.
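
To make the pipeline concrete, the sketch below wires the capture, OCR, and speech stages together using off-the-shelf Python libraries (pytesseract for OCR, pyttsx3 for offline text-to-speech). These library choices and the file name are illustrative assumptions, not the optimized handheld algorithms this research aims to develop.

    # Desktop-scale sketch of the camera -> image processing -> OCR ->
    # text-to-speech pipeline; library choices are illustrative, not the
    # optimized handheld implementation this project targets.
    from PIL import Image
    import pytesseract   # OCR engine wrapper
    import pyttsx3       # offline text-to-speech

    def read_document(image_path: str) -> None:
        image = Image.open(image_path)
        # (Real system: clean and rectify the image first; see Objective 1.)
        text = pytesseract.image_to_string(image)
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

    read_document("captured_page.jpg")  # e.g., triggered by a voice "Read" command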

While the system as described in this overview is impressive, there are many gaps in this researcher's experience that prevent the complete implementation of such a system. Substantial effort will need to be spent working with each aspect of the system described.

 

Document Image Processing: The document image must be cleaned and rectified before it is sent to an OCR method for further processing. Objective 1: Develop algorithms, implemented on a handheld computer, that correct geometric distortions in text images.
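
As one standard approach to this kind of rectification (an illustration, not the algorithm this research will develop), a perspective distortion can be removed with a homography once the page's corners are located; OpenCV provides the needed primitives. Corner detection itself is omitted, and the coordinates below are hypothetical.

    # One conventional rectification step: warp a skewed page to a
    # fronto-parallel view given its four detected corners.
    import cv2
    import numpy as np

    def rectify_page(image, corners, width=800, height=1000):
        """corners: page corners in the source image, ordered
        top-left, top-right, bottom-right, bottom-left."""
        src = np.float32(corners)
        dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])
        H = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(image, H, (width, height))

    img = cv2.imread("captured_page.jpg")
    flat = rectify_page(img, [(105, 80), (720, 60), (760, 950), (90, 970)])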

 

Text-to-Speech: To enable the reading of the processed image, speech synthesis algorithms will need to be optimized for a handheld device. Objective 2: Develop speech-processing algorithms implemented on a handheld computer.

 

System Optimization: Commercially available readers average 40 seconds to scan and process an image before beginning to read. Objective 3: Optimize algorithms for performance with the goal of processing one page of text in 40 seconds on a handheld computer.

 

 

John Kegley

kegley.4@wright.edu

Photo: a blind participant working on the Web with a JAWS screen reader

Advisor: Dr. Wayne Shebilske

John received his B.S. in Psychology from the University of South Florida in 2000 and his M.S. from Western Kentucky University in 2004. He is currently a Ph.D. student in Human Factors Psychology. He is interested in accessibility/usability issues for complex human-machine systems, specifically investigating how individuals with disabilities use assistive technologies to interact with web-enabled software applications.

 

Research Synopsis:

     Our previous research was on web pages that started highly compliant with World Wide Web Consortium (W3C) standards and that had been designed to be accessible and usable for people with low vision and blindness. Despite starting with such state-of-the-art materials, we identified gaps in the interaction between people, screen readers, screen magnifiers, and web pages. We discovered similar compliance and gaps in Health Information Technology (HIT). Our innovation is to close these gaps with a Usability Proficiency Assessment Tool (UPAT) and UPAT-based training that separates problems related to people from problems related to assistive technology and web pages. These distinctions will enable recommendations for improving HIT tasks. Accordingly, the proposed research will test five hypotheses about how UPAT-based assessment and training will improve access, comprehension, and use of HIT, providing a foundation for future research on other eHealth applications.

 

Hypothesis 1: Relative to students without disabilities, students with visual impairments will perform worse on entering electronic medical records before they have UPAT-based assessment and training.


Hypothesis 2: Relative to students without disabilities, students with visual impairments will perform as well on entering electronic medical records after they have UPAT-based assessment and training.


Hypothesis 3: Relative to students without disabilities, students with visual impairments will perform worse at finding and understanding preventative mental health HIT before having UPAT-based assessment and training.


Hypothesis 4: Relative to students without disabilities, students with visual impairments will perform as well at finding and understanding preventative mental health HIT after having UPAT-based assessment and training.

Hypothesis 5: Students with or without disabilities will experience fewer negative effects and more positive effects from mental health information when they understand the information better.
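
For illustration, Hypotheses 1 and 2 amount to between-group comparisons before and after training. The sketch below shows how such a comparison might be run on task-performance scores; the data are hypothetical placeholders, and the study's actual measures and analysis may differ.

    # Illustrative between-group test for Hypotheses 1-2; the scores
    # are hypothetical placeholders, not study data.
    from scipy import stats

    # Task-performance scores on entering electronic medical records.
    visually_impaired_pre = [52, 48, 55, 50, 47]  # before UPAT-based training
    sighted_pre = [78, 82, 75, 80, 79]

    t, p = stats.ttest_ind(visually_impaired_pre, sighted_pre)
    print(f"pre-training: t = {t:.2f}, p = {p:.4f}")  # H1 predicts a gap

    # H2 predicts this gap closes after UPAT-based assessment and
    # training, i.e., a non-significant difference post-training.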

 

     Testing Hypotheses 1-4 will provide a foundation for improving HIT for all people with visual impairments. It will also immediately benefit the participants in the study with visual impairments who use HIT. Testing Hypothesis 5 will also improve HIT at WSU and provide a foundation for improving other HIT. It will also suggest whether HIT has a positive, neutral, or negative impact on the intervention it delivers, as assessed by responses to simulated scenarios.

 

     The conceptual framework is derived from dynamic systems theory and from a systematic approach to design. From dynamic systems theory, the goal is to facilitate the processing of time-sensitive and complex information. Although typical HIT tasks do not seem time sensitive or complex, visual impairments make them so. Complexities relate to the strategic use of assistive technologies and to their limitations; time sensitivities relate to timing-out parameters that are set for people without disabilities. Such complexity is what motivates a systematic design process, based on formative development guided by repeated designing and testing. The unifying conceptual framework is enabling us to transition ideas back and forth between our research to assist military troops and our research to assist individuals with disabilities. The potential to significantly advance our knowledge and understanding applies to people with or without disabilities and to HIT, which will be moved toward improved access, comprehension, and use for all.

 

James Leonard

leonard.38@wright.edu

Advisor: Dr. John Flach

Jim received his B.S. in Psychology in 2006 from Louisiana State University in Baton Rouge, LA. He is currently working on his M.S. in Human Factors Psychology. The focus of his thesis is brain computer interface devices and their displays and feedback, with regard to an individual with cerebral palsy. Jim is currently conducting case studies with local area adolescents with disabilities and the Cyberlink Brainfingers system. He is also working with Smith Middle School Special Education faculty to help integrate such devices into their classrooms.

 

 Research Synopsis:

Jim's current research is centered on brain computer interfaces (BCIs) and experimental displays. He has been working with several individuals in the area using various BCIs (Brainfingers, Neural Impulse Actuator, etc.). The population he works with often includes individuals in a locked-in state or a similar condition (severe CP, TBI, etc.). Many individuals have specific feedback needs that must be addressed (for instance, blindness rules out visual feedback); these individuals require alternative interfaces that may or may not already be available. One such interface being developed is an auditory display that allows the user to perceive the position of the mouse cursor on the screen in real time using stereo pan and a specific sound modulation. This display is being created for a partially locked-in individual with cerebral palsy and severe visual impairment. As BCIs are not yet a common way of interacting with computers, there is much ground to explore. New spatial and organizational metaphors are needed to accommodate new devices and control schemes, and Jim's research is at this nexus.
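
A minimal sketch of that sonification idea: horizontal cursor position drives stereo pan and vertical position drives pitch, producing a short stereo tone the user can track by ear. The exact mappings used in the actual display are not specified above, so the pan and frequency laws below are assumptions; playback would go through any audio library that accepts stereo float arrays.

    # Sketch of an auditory cursor display: x -> stereo pan, y -> pitch.
    # The specific mappings are assumptions for illustration only.
    import numpy as np

    SAMPLE_RATE = 44100

    def cursor_tone(x, y, screen_w=1920, screen_h=1080, duration=0.05):
        """Return a short stereo buffer encoding cursor position."""
        pan = x / screen_w                     # 0 = hard left, 1 = hard right
        freq = 200 + 800 * (1 - y / screen_h)  # top of screen = higher pitch
        t = np.linspace(0, duration, int(SAMPLE_RATE * duration), endpoint=False)
        tone = 0.3 * np.sin(2 * np.pi * freq * t)
        left, right = tone * (1 - pan), tone * pan
        return np.column_stack([left, right])

    buf = cursor_tone(x=480, y=270)  # upper-left quadrant: high pitch, panned left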

 

 

Katherine Lippa

lippa.2@wright.edu

Advisor: Dr. Helen Klein

Katherine received her B.A. in Near Eastern & Judaic Studies & History from Brandeis University, Waltham, MA. She received her M.S. in Human Factors Psychology from Wright State University in 2006. Her dissertation work focuses on the complex cognition associated with managing disabilities and chronic diseases. Katherine has worked with several diabetes clinics to help optimize their education programs.

 

Julio Mateo

mateo.2@wright.edu

Exploring Ways to Improve Gaze-based Human-Computer Interaction

Advisor: Dr. Robert Gilkey

Julio received his B.A. in Psychology from the Universidad Pontificia de Salamanca, Spain, and completed his M.S. in Human Factors Psychology at Wright State University. In his Master's thesis, he explored the effect of variable feedback delay (e.g., Internet delay) on visual target-acquisition performance (e.g., in teleoperation). Since he joined the IGERT program, Julio has completed his practicum at Goodwill/Easter Seals Miami Valley (Technology Resource Center and Vision Services) and has conducted research on gaze interaction in collaboration with the IT University of Copenhagen. For his Ph.D. dissertation, Julio is exploring human-factors issues associated with mobility aids for blind pedestrians and searching for innovative approaches to improve these mobility aids. Currently, he is also exploring multimodal navigation aids for soldier land navigation in collaboration with the Air Force Research Laboratory.

 

Research Synopsis:

     Using gaze as a computer input holds the potential to enable fast, hands-free computer access for users who cannot operate conventional input devices (e.g., keyboard and mouse). For example, users with physical disabilities that result in limited hand function could benefit from well-designed gaze-based computer systems. In order to realize its potential, gaze input must enable users to perform, in a fast and reliable manner, the point-and-select tasks (e.g., clicking on an icon) that are usually performed with a mouse when interacting with graphical user interfaces. Although gaze is well suited for pointing and can provide a faster (and arguably more natural) pointing method than a mouse, selecting targets using gaze alone is less straightforward.

 

     Researchers at the IT University (ITU) of Copenhagen and I (Mateo et al., 2008) explored the potential of combining gaze input (for pointing) with an electromyographic (EMG) signal from the forehead (for selection). We compared this gaze-EMG input method to the traditional mouse using a target-acquisition task. We found that participants performed the task faster using the hands-free gaze-EMG input method than using the mouse, showing the potential of this input combination. In a later paper (San Agustin et al., submitted), we advocated for the use of gaze input for game interaction using data from the study described above and another study previously conducted by researchers at ITU.
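
The gaze-EMG combination described above reduces to a simple rule: point with the gaze estimate, select when the forehead EMG crosses a threshold. The sketch below illustrates that loop; get_gaze_point, get_emg_amplitude, and click are hypothetical stand-ins for the eye tracker, EMG amplifier, and OS input hooks, and the threshold is an assumed value.

    # Illustrative gaze-for-pointing, EMG-for-selection loop; the three
    # I/O functions are hypothetical placeholders for real device hooks.
    EMG_THRESHOLD = 0.6  # normalized forehead-EMG level counted as a "click"

    def interaction_step(get_gaze_point, get_emg_amplitude, click):
        x, y = get_gaze_point()              # gaze does the pointing
        if get_emg_amplitude() > EMG_THRESHOLD:
            click(x, y)                      # facial-muscle burst does the selecting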

 

     The continuous presence of eye movements (even when a user is staring at one point) results in cursor jitter that makes the selection of small targets difficult when using gaze alone. One way to address this problem is by designing interfaces that increase the effective size of a target, making it easier to select. For example, many gaze-based systems include a two-step magnification tool which, when first activated, magnifies the area where the activation occurred (but does not trigger a selection) and, when activated for a second time, selects the (now bigger) target under the cursor. Although fairly successful at enabling small-target selection, this technique requires the user to make two discrete activations and can be quite slow.
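
The jitter described above is commonly tamed by smoothing the raw gaze samples before they drive the cursor. The sketch below uses a short moving-average filter, a generic technique rather than the specific method evaluated in these studies; a larger window steadies the cursor at the cost of lag, mirroring the speed-accuracy trade-off discussed here.

    # Moving-average smoothing of raw gaze samples to damp cursor jitter;
    # a generic fix, not the specific approach evaluated in these studies.
    from collections import deque

    class GazeSmoother:
        def __init__(self, window: int = 10):
            self.samples = deque(maxlen=window)

        def update(self, x: float, y: float) -> tuple:
            """Add a raw gaze sample; return the smoothed cursor position."""
            self.samples.append((x, y))
            n = len(self.samples)
            sx = sum(p[0] for p in self.samples) / n
            sy = sum(p[1] for p in self.samples) / n
            return sx, sy

    smoother = GazeSmoother(window=10)
    cursor = smoother.update(812.4, 399.1)  # jittery input, steadier output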

 

     Researchers at ITU and I (Skovsgaard et al., 2008) conducted two studies to explore the potential of a novel zooming interface tool. Using this tool, a user activation triggers a gradual increase in the size of the objects in the area where the activation occurred (i.e., zooming), and, at the end of the zooming period, the (now bigger) object in the center of the zooming window is selected. As the objects gradually increase in size, the user can adjust the cursor position to hit the continuously growing target. The two-step magnification tool performed better than the zooming interface with very small targets. Overall, however, the zooming interface enabled faster selection than two-step magnification and, for some targets small enough to need interface aids, showed speed advantages without sacrificing accuracy. In a later paper (Skovsgaard et al., submitted), we introduced an application that allows gaze-interaction users to easily access and switch among single selection, two-step magnification, and the zooming interface.

 

 

References

 

Mateo, J. C., San Agustin, J., & Hansen, J. P. (2008). Gaze beats mouse: Hands-free selection by combining gaze and EMG. CHI Extended Abstracts, Italy, 3039-3044.

San Agustin, J., Mateo, J. C., Hansen, J. P., & Villanueva, A. (submitted). Evaluation of the potential of gaze input for game interaction. PsychNology.

Skovsgaard, H. H. T., Hansen, J. P., & Mateo, J. C. (2008). How can tiny buttons be hit using gaze only? Proceedings of the 4th Conference on Communication by Gaze Interaction – COGAIN 2008: Communication, Environment and Mobility Control by Gaze, Czech Republic, 38-42.

Skovsgaard, H., Mateo, J. C., & Hansen, J. P. (submitted). Hitting those small targets with gaze only. PsychNology.

 

Holly Slack

slack.2@wright.edu

Advisor: Dr. John Flach

Holly received her B.S. in Biological Sciences in 2002 from Wright State University. She is working on completing her M.S. in Biological Sciences as well as pursuing a Ph.D. in Human Factors Psychology. Her dissertation work will focus on the successful management of a disability while pursuing academic and professional goals.

 

Research Synopsis:

     As part of my research, I am keeping a journal that highlights events in my life that I believe are unique to a person with a physical disability. My primary intention when I began this project was to provide others with a glimpse of what daily life is like for a person living with a physical disability. As the project has developed, I am now focusing my attention on the environmental and social barriers that I encounter on almost a daily basis. In doing so, I am attempting to identify strategies that have been successful in overcoming these obstacles. In addition, I am also identifying situations in which current strategies have been ineffective. My hope is to determine the critical variables that contribute to these barriers, so I can then develop new or modified strategies that better address these obstacles. My primary interest is determining what factors contribute to an individual’s successful management of a disability while he/she is pursuing academic and professional goals.

 
