THESIS
Utilizing large language models for improved at-home stroke care and rehabilitation.
WEEK 1
Hello and Welcome!
This is my thesis page. I'm going to use this space to explore my mental meanders, develop my tool, and document the development of my prototypes.
I'm so excited to take you along with me on this journey.
Let's get straight into it: during the 2020 COVID lockdown, I learnt that stroke was the 5th leading cause of mortality in Nepal. By 2023, it had moved up to 3rd, according to the WHO.
Having parents who worked tirelessly to hold up rickety health systems in Nepal, I noticed many high-resource specialty centers being established, building a great deal of knowledge and practice around thrombolyzing the blood clot that caused the stroke and the immediate in-hospital care that followed. However, pain lingered, and patients were often left without much help for rehabilitation, especially after receiving care and returning to their hometowns in rural Nepal.
While in conversation last semester during pre-thesis with a neurologist who has roots in Nepal and currently practices in the UK, they suggested that I look into leveraging AI for stroke recovery and rehabilitation.
I wanted to explore AI for rehabilitation, despite understanding its effects on a population that has suffered because of colonization.
WEEK 2
Ok, week 2 here we go. Lots of feedback after the first week and introducing our projects in class.
I spoke a little bit about my pre-thesis, and how it will help lead me to my prototype. If you would like to flip through my slides, find them here.
Anyways, our task for the week was to create our first prototype. I shared my prototype… which conveniently broke down, and I had to share a video of the working prototype instead. Classic tech stuff.
Anyways, here is a working video of my prototype.
I received a bunch of feedback from my advisors for the thesis project.
Going in, I was really keen on creating an experience that honored disability and created a safe space for people with all sorts of disabilities beyond stroke.
After creating this prototype, my advisors suggested focusing on one part of the extremely long and strenuous rehabilitation journey. I wanted to focus on a simple, universal method used by speech therapists and physiotherapists.
WEEK 3
So, after another round of conversations with the speech therapist, I created a new prototype, focusing on one part of the rehabilitative journey.
I decided to stick to counting from 1 to 10, and keep the prototype super simple:
Creating that was super fun. I vibe-coded the prototype and, with the help of the speech therapist, a couple more rounds of testing, and continued conversation, arrived at a working version.
I was excited to show my prototype to my advisors. I took it one step further and added more refinement and research with a creative technologist, Louise Lessel. I showed her both of my working prototypes: the physical therapy device and the audio-visual prototype. She taught me about shaders, and how I could create a high-tech experience for rehabilitation at a hospital, then create a lo-fi, low-code version that carries the same design elements onto a phone for at-home use in rural Nepal. She did suggest that setting up AI in a low-resource context for rural Nepalese users would be difficult, both in terms of digital literacy and operationalization.
I also visited Torin Blankensmith, another creative technologist, who helped me decrease the detection confidence threshold of my immersive experience, which helped the computer read hands better, as hands come in all sorts of shapes and sizes. He also told me about the immersive experience he created with his partner at Mt. Sinai.
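To make the threshold idea concrete, here is a minimal sketch of what lowering detection confidence does. It assumes a MediaPipe-style hand model that reports a confidence score per detected hand (the function name and the score values are my own illustrations, not Torin's actual code):

```python
def keep_detections(scores, min_confidence):
    """Keep per-hand detection confidences at or above the threshold.

    Lowering min_confidence admits hands the model is less sure about,
    e.g. post-stroke hands with atypical posture or range of motion.
    """
    return [s for s in scores if s >= min_confidence]

# Hypothetical per-frame confidences for three detected hands.
scores = [0.92, 0.61, 0.38]

strict = keep_detections(scores, 0.7)   # only the "textbook" hand survives
relaxed = keep_detections(scores, 0.3)  # atypical hand shapes are kept too
```

In MediaPipe-style APIs this corresponds to passing a lower `min_detection_confidence` when constructing the hands model; the trade-off is more false positives in exchange for recognizing a wider variety of hands.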
WEEK 4
Louise Lessel also connected me to Torin's partner, Mischa, whom I will be visiting soon; they will help me demo their rehabilitative space. I will take inspiration from this experience, and I've already made a moodboard to gain more clarity on what I want my experience to look and feel like:
Then I went on three gif, as advised by Louise, and found renders for my prototype that reflect my big hospital prototype as well.
Here is the working link to the prototype.
Anyways, I reflected on the importance of grounding my AI prototype in clinical reality rather than purely speculative interaction design. I showed the concept to a speech therapist and am currently waiting to hear back. I realized that imagining how a therapist would physically sit beside a patient while interacting with the system changes everything — this is not a standalone tech product, but a co-therapy tool.
The physiotherapists provided unexpected but incredibly useful phonetic insights. They noted that words with consonants are comparatively easier than vowel-heavy words. In number articulation, 1, 2, 4, and 10 are easier. However, 3, 5, 6, 7, 8, and 9 are more difficult — with 7 and 8 being the hardest.
For vowels:
“A” and “O” are easier.
“I” and “E” are harder.
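These difficulty tiers could directly drive the order of drills in the tool. A minimal sketch, with the tier groupings copied from the therapists' notes above (the function name and easiest-first strategy are my own assumptions, not something the therapists prescribed):

```python
# Difficulty tiers for number articulation, as reported by the therapists.
EASY = [1, 2, 4, 10]
HARD = [3, 5, 6, 9]
HARDEST = [7, 8]  # 7 and 8 were called out as the hardest

def drill_order():
    """Sequence number drills easiest-first, so early successes build
    confidence before the patient reaches the hardest sounds."""
    return EASY + HARD + HARDEST
```

The same tiering could later be used adaptively, e.g. repeating a tier until the patient clears it rather than always running the full 1-to-10 count.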
This shifted my thinking. I initially assumed vowels would be simpler, but that assumption came from a non-clinical perspective. The difficulty is not intuitive — it is embodied.
I am now thinking about how the AI should communicate this to participants. Should they know that certain sounds are neurologically harder? Would that reduce frustration? Or would it feel discouraging?
Another tension I'm grappling with is aesthetic direction. Hospital interfaces feel cold and clinical. Children's speech apps feel gamified and overly cheerful. My audience, adult stroke survivors in Nepal, exists in neither category. The interface must feel dignified, calm, and culturally grounded, without feeling infantilizing or sterile.
This feedback is pushing me to rethink the role of AI — not as a flashy innovation, but as a quiet, adaptive companion embedded within a therapist’s care ecosystem.
Existing products on the market are B2B solutions aimed at clinicians, and what little does exist is not very user-friendly and looks visually dated. There are many tools for paediatric use; however, they are more for cognitive streamlining and games than for rehabilitation.
I'm excited to test my prototype with patients in Nepal. The next step is to make the camera understand mouth shapes through face recognition and AI, so the tool can give feedback on the shape of the mouth as well.
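One common way to get mouth-shape feedback from face recognition is to compare lip landmarks. A minimal sketch, assuming a face-mesh-style model (such as MediaPipe Face Mesh) that returns normalized (x, y) landmark points; which landmark indices map to the lips, and the example coordinates, are assumptions for illustration:

```python
import math

def mouth_aspect_ratio(upper_lip, lower_lip, left_corner, right_corner):
    """Vertical mouth opening divided by mouth width.

    Each argument is an (x, y) tuple taken from a face-landmark model.
    A higher ratio means a more open mouth, e.g. an "O" shape; a ratio
    near zero means the lips are closed or stretched wide, as in "E".
    """
    vertical = math.dist(upper_lip, lower_lip)
    horizontal = math.dist(left_corner, right_corner)
    return vertical / horizontal if horizontal else 0.0

# Hypothetical landmark positions for an open "O"-like mouth shape.
ratio = mouth_aspect_ratio((0.50, 0.40), (0.50, 0.70),
                           (0.30, 0.55), (0.70, 0.55))
```

Feedback could then be as simple as comparing the patient's ratio against a target range for each sound, which keeps the computation light enough to run on a phone for the at-home version.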