Get in touch
Fill out the form and I’ll respond within 24 hours.
kosurusai646@gmail.com
Phone
+91 9515457049
Location
India
A timeline highlighting my education, hackathon achievements, and technical growth.

Bachelor of Technology – Data Science
Manipal Academy of Higher Education, Udupi
· Expected Graduation: July 2027
Specialization
Artificial Intelligence, Machine Learning, and Data Science
Technical Skills
Relevant Coursework

Developed an interactive mobile application in collaboration with Manipal Academy of Higher Education (MAHE) and KMC Hospital, Udupi, aimed at connecting cancer patients with their assigned doctors. The application facilitates both pre-operative and post-operative care, allowing patients to track their treatment and communicate efficiently with medical staff.
Tools & Technologies: Flutter (Frontend), Firebase Authentication & Firestore (Backend), Figma (UI/UX Prototyping), Flutter Packages – Provider, Image Picker, Local Notifications.
Achievements: Successfully delivered a fully functional and interactive application to KMC Hospital, enabling seamless communication and care management for cancer patients. The project won a cash prize of ₹20,000.
GitHub: Cancer Gateway App


This is a mental health AI agent designed to support individuals in need. It is built with LangChain for agent orchestration and prompt engineering, FastAPI to expose the backend endpoints, Twilio for emergency-contact functionality, and Ollama to run models locally on the device. The current models are alibayram/medgemma:4b and Qwen2.5:7B. Future plans include fine-tuning the models, building a frontend, and adding new features based on user feedback collected through the landing page. The web application will launch after securing a suitable investor.
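The agent's core loop can be sketched as a routing step: screen each user message for crisis language, escalate via the emergency-contact hook when needed, and otherwise hand the message to the local model. This is an illustrative sketch only; the function names (`route_message`), the keyword list, and the `generate`/`notify_emergency_contact` callables stand in for the project's actual LangChain, Ollama, and Twilio calls and are not its real API.

```python
# Hypothetical sketch of the agent's routing step. `generate` stands in
# for the local Ollama model call (e.g. Qwen2.5:7B) and
# `notify_emergency_contact` for the Twilio hook; both are illustrative.
from typing import Callable

CRISIS_TERMS = {"suicide", "self-harm", "hurt myself"}  # assumed screen list


def route_message(
    text: str,
    generate: Callable[[str], str],
    notify_emergency_contact: Callable[[str], None],
) -> str:
    """Screen a user message: escalate crises, otherwise ask the model."""
    lowered = text.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        # Emergency path: trigger the contact hook instead of the model.
        notify_emergency_contact(text)
        return "I've reached out to your emergency contact. You are not alone."
    # Normal path: wrap the message in a supportive system prompt.
    return generate(
        f"You are a supportive mental-health assistant. User says: {text}"
    )
```

In the real service this routing would sit behind a FastAPI endpoint, with the model call dispatched through LangChain to Ollama.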
View Project
This is the landing page for my Mental Health Therapy AI Agent project. It provides information about the project, its features, and how to get involved or support the development.
Explore Concept

This system implements a decoder-only Transformer language model inspired by Attention Is All You Need. The architecture consists of token embeddings, learned positional embeddings, and a stack of causal self-attention Transformer blocks. Each block contains multi-head masked self-attention, a position-wise feed-forward network, residual connections, and Layer Normalization. Causal masking is enforced using a lower-triangular attention mask to prevent information leakage from future tokens.

The attention mechanism follows scaled dot-product attention: queries, keys, and values are linearly projected from the embedding space, multiple attention heads operate in parallel, and their outputs are concatenated before a projection back to the model dimension. The output is passed through a linear language-modeling head and optimized using cross-entropy loss for next-token prediction.

The model is trained autoregressively on character-level input sequences, deviating from the original paper, which uses subword tokenization. Positional encodings are learned embeddings rather than sinusoidal. The architecture omits an encoder stack, cross-attention layers, label smoothing, and learning-rate warmup, making it simpler than the original Transformer. Dropout is configurable but currently disabled.

Input and Output
Input: a sequence of character indices with a fixed context window (block_size).
Output: logits over the character vocabulary and generated text via probabilistic sampling.

This implementation prioritizes conceptual clarity and local inference over scale and optimization fidelity.
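The masked scaled dot-product attention step described above can be sketched in NumPy for a single head (the project itself uses multiple heads and learned projections; the weight matrices and dimensions here are illustrative):

```python
import numpy as np


def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product attention with a causal mask.

    x: (T, d) sequence of token embeddings; Wq/Wk/Wv: (d, d) projections.
    A lower-triangular mask blocks attention to future positions.
    """
    T, d = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)               # (T, T) attention logits
    mask = np.tril(np.ones((T, T), dtype=bool))  # lower-triangular causal mask
    scores = np.where(mask, scores, -np.inf)     # hide future tokens
    # Numerically stable softmax over each row.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                           # (T, d) attended values
```

Because of the mask, the output at position i depends only on inputs at positions 0..i, which is exactly the autoregressive property the model relies on for next-token prediction.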
View Project