My Research:
I'm currently researching under my advisor, Dr. Stacy Branham, in the INsite Lab at the University of California, Irvine.
Cuddling Up With a Print-Braille Book: How Intimacy and Access Shape Parents' Reading Practices with Children (CHI 2024)
You can find the publication here.
Understanding the State of User-Provided Image Descriptions on Twitter (2018/2019)
"To make images on Twitter and other social media platforms accessible to screen reader users, image descriptions (alternative text) need to be added that describe the information contained within the image. The lack of alternative text has been an enduring accessibility problem since the “alt” attribute was added in HTML 2.0 over 20 years ago, and the rise of user-generated content has only increased the number of images shared. As of 2016, Twitter provides users the ability to turn on a feature that allows descriptions to be added to images in their tweets, presumably in an effort to combat this accessibility problem. What has remained unknown is whether simply enabling users to provide alternative text has an impact on experienced accessibility. In this paper, we present a study of 1.09 million tweets with images, finding that only 0.1% of those tweets included descriptions. In a separate analysis of the timelines of 94 blind Twitter users, we found that these image tweets included descriptions more often. Even users with the feature turned on only write descriptions for about half of the images they tweet. To better understand why users provide alternative text descriptions (or not), we interviewed 20 Twitter users who have written image descriptions. Users did not remember to add alternative text, did not have time to add it, or did not know what to include when writing the descriptions. Our findings indicate that simply making it possible to provide image descriptions is not enough, and reveal future directions for automated tools that may support users in writing high-quality descriptions. "
You can find the publication here.
Developing a Computer-Vision-Based Tool for Indoor Navigation (2017)
"The idea of using technology to help those with visual impairments navigate has been studied extensively. However, most of these systems focus on getting the user from place to place, rather than helping the person get a better sense and intuition of their environment. Providing blind people with the same intuitive clues that sighted persons have may allow them to better navigate physical spaces, and also feel more empowered to freely explore the physical location. For this purpose, we have begun to study the process that sighted individuals use for familiarizing and getting a sense of their environment. We believe our results will show it is possible to enhance the navigational capabilities of blind people by providing access to the same clues used by sighted people to get a sense of their environment. "
This work was published, though the paper was written early in the summer. The additional work done over the rest of the summer was awarded third place in the Student Research Competition at ACM ASSETS 2017. Texas A&M Today published an article about my involvement.
The Dynamic-Doubling List (2016/2017)
While doing unrelated research with Dr. Dylan Shell, I had an idea for a novel data structure. Its insertion time is constant and, unlike a dynamic array, it never copies existing elements when it grows. Retrieval is also constant time, albeit slower than a traditional dynamic array's. I ran benchmark tests in C++ and Python, comparing my data structure against the native list and vector.
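The paper has the full details, but the properties above can be sketched with a block-based design. This is a minimal illustration of one plausible implementation, assuming the structure stores elements in a sequence of blocks whose sizes double (1, 2, 4, 8, ...); the class name and methods here are hypothetical, not taken from the paper:

```python
class DynamicDoublingList:
    """Sketch of a doubling-block list (illustrative, not the published design).

    Appends never copy existing elements: when the current block fills,
    a new block twice as large is allocated. With k blocks the total
    capacity is 2**k - 1, so index lookup reduces to bit arithmetic
    on (index + 1) and runs in constant time.
    """

    def __init__(self):
        self._blocks = []   # block k holds exactly 2**k slots
        self._size = 0

    def append(self, value):
        # Capacity so far is 2**k - 1 for k allocated blocks; once full,
        # allocate the next (doubled) block instead of copying anything.
        if self._size == (1 << len(self._blocks)) - 1:
            self._blocks.append([None] * (1 << len(self._blocks)))
        block = (self._size + 1).bit_length() - 1   # which block
        offset = self._size + 1 - (1 << block)      # slot within it
        self._blocks[block][offset] = value
        self._size += 1

    def __getitem__(self, index):
        if not 0 <= index < self._size:
            raise IndexError(index)
        block = (index + 1).bit_length() - 1
        offset = index + 1 - (1 << block)
        return self._blocks[block][offset]

    def __len__(self):
        return self._size
```

The extra bit-arithmetic per lookup is why retrieval, while still constant time, is slower than a flat dynamic array's single pointer offset.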
I published a paper on the Dynamic-Doubling List in Explorations, an undergraduate research journal overseen by Texas A&M. You can find the paper here.