{"data":{"featured":{"edges":[{"node":{"frontmatter":{"title":"AI-Powered Video Content Analyzer","cover":{"childImageSharp":{"gatsbyImageData":{"layout":"constrained","placeholder":{"fallback":"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAMCAYAAABiDJ37AAAACXBIWXMAABYlAAAWJQFJUiTwAAABQElEQVQoz41S21LDIBDNs1KugRCatJnGmlHr9dH//6/jLA2UxtTx4Qwse/bsjWojHQhMNeDGg3EHcxwQvp9g38YI935BtD8foPZd5G5UE8GkA9cNKm1aDMMj+sMEFwZw7TNpo89gxT1D/UYUVHaLw3TCOJ0g64B7YXPGBHqnROVbQuruTlhw1aAiMlXJpVtk83MbHtJuIUy76iMxJR1e9kcIG1AR2VAFM2EtiMToTH6yyZeScOXgXQdhPCpWVHZL8GquC18aT265FLolqF0X5/iXYF4KUw5CtqhFD6t2qGUfYVQPJbawao/WjWjMAC27GCh1gBF9PGOSeUGXCldaFtRqUWG5FEm8RYVZMBrCgq1sOQlSu/8WJGLtdzmoFGZyFij+YXwv5nZGIajqgOn5A0+vXwj9ePWxlx+8tNfuJPgDmlYmMNpuZZUAAAAASUVORK5CYII="},"images":{"fallback":{"src":"/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/85391/timeline.png","srcSet":"/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/cebcc/timeline.png 175w,\n/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/b3b96/timeline.png 350w,\n/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/85391/timeline.png 700w,\n/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/88fa4/timeline.png 1400w","sizes":"(min-width: 700px) 700px, 100vw"},"sources":[{"srcSet":"/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/9aa63/timeline.avif 175w,\n/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/f847f/timeline.avif 350w,\n/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/d6d4f/timeline.avif 700w,\n/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/04d38/timeline.avif 1400w","type":"image/avif","sizes":"(min-width: 700px) 700px, 100vw"},{"srcSet":"/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/240e7/timeline.webp 175w,\n/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/5f909/timeline.webp 350w,\n/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/4d9a8/timeline.webp 700w,\n/Portfolio/static/0cbcfec163de066a3bfd1fb366db7fe4/bcfc5/timeline.webp 1400w","type":"image/webp","sizes":"(min-width: 700px) 700px, 
100vw"}]},"width":700,"height":435}}},"tech":["Computer Vision","NLP","OpenCV","PyTorch","Python"],"github":"https://github.com/sej07/VideoContentAnalyzer","external":"https://huggingface.co/spaces/Sej7/Video-Content-Analyzer","cta":null},"html":"<p>Engineered a multimodal video analysis pipeline integrating YOLOv8 + BoTSORT object tracking, OpenAI Whisper\nspeech transcription, and CLIP scene understanding to automate semantic extraction across all three modalities\nsimultaneously</p>\n<p>Achieved 6x faster-than-real-time processing with CPU-only inference, analyzing a 6-minute video in under 62 seconds with a\npeak memory footprint under 1.7GB, enabling deployment in memory-constrained environments</p>\n<p>Architected a FastAPI backend with async background job processing and RESTful endpoints, containerized via\nDocker, supporting concurrent video uploads with real-time job status tracking and sub-500ms API response times</p>"}},{"node":{"frontmatter":{"title":"Research Paper Classifier","cover":{"childImageSharp":{"gatsbyImageData":{"layout":"constrained","placeholder":{"fallback":"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAALCAYAAAB/Ca1DAAAACXBIWXMAABYlAAAWJQFJUiTwAAAB7ElEQVQoz2VS2W7cMAz0bta3LV+SLFvysbb3yKZtUqBoiwL9/8+agnQQIOgDddDUcDhjT7c9tLEoaw3V9pC6R6MM71EiOOK04D0vGq5L8gphnCMTNZKs3Pe8wm19wAujDF3rcF6ucOOCcd4wny98npcLOjvxgyDKIFsLN17QuxWNdqi1gzIj7LghySp4Bx8eLXFaws03FHULUSqkeY1TkCCIc85RcSoaTPMd+mVF+XNCrTuuS/OKmRPjw1MIzzsGEIXEt7df6MYLtD2jVBaVcpDdzHeKfrpy2PWGfrtiPG94fnzlyYZp4QkYkBbSZ9meuRt9oIizEqaf0PYTGm2RiZ1FLhpIZWH6EcpYzpHGNNEnQDLBD1MuIEDKVY1hE+hO37hRWkCUElnRsCSs2zFgsOMp2gGpkEwg8f0o27sSQzuhkuY9J7iOmNaqYwKk3ZMfMzsCY4Z0oISbVvRuRhDlzIaYdHaG7gbU0kBUihulomZAcrmUPQNSfAAywzCFVIY7EtjeNeWRyOG8kCwBsaQ7jSvbAZV2/wPSwQ8STNMC3Voehf49bRwzJhlIo4+H7yM27YDGzAhigVOw5z8AqfDx5Q2///zF6/cfWLYblu2O6/2FwcmEolIcZNIpTLEMDq/XFYMsIRsNPylxOAb4B4bMB8Ielg15AAAAAElFTkSuQmCC"},"images":{"fallback":{"src":"/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/1dc65/demo.png","srcSet":"/Portfoli
o/static/9b5b6355cdca4c0df449fa7446d4d691/9a130/demo.png 175w,\n/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/47c72/demo.png 350w,\n/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/1dc65/demo.png 700w,\n/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/4aa24/demo.png 1400w","sizes":"(min-width: 700px) 700px, 100vw"},"sources":[{"srcSet":"/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/dae43/demo.avif 175w,\n/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/69c10/demo.avif 350w,\n/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/ebe22/demo.avif 700w,\n/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/e0c37/demo.avif 1400w","type":"image/avif","sizes":"(min-width: 700px) 700px, 100vw"},{"srcSet":"/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/5d873/demo.webp 175w,\n/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/853c6/demo.webp 350w,\n/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/978b5/demo.webp 700w,\n/Portfolio/static/9b5b6355cdca4c0df449fa7446d4d691/1b4eb/demo.webp 1400w","type":"image/webp","sizes":"(min-width: 700px) 700px, 100vw"}]},"width":700,"height":392}}},"tech":["PyTorch","BERT","Transformers","Python"],"github":"https://github.com/sej07/Research-Paper-Classification-Fine-Tuning-BERT-","external":null,"cta":null},"html":"<p>Fine-tuned BERT-base (109M parameters) on 28K arXiv abstracts for 11-category classification, achieving 80.3% test\naccuracy (8.8x above the random baseline) with 9 of 11 categories exceeding 80% F1</p>\n<p>Conducted systematic error analysis via a confusion matrix, identifying semantic overlap between interdisciplinary\ncategories and documenting category-specific failure modes to guide future dataset refinement</p>\n<p>Optimized training for Apple Silicon MPS with gradient clipping, linear warmup scheduling, and AdamW, achieving\nstable convergence in a single epoch across 3,549 batches in a resource-constrained 
environment</p>"}},{"node":{"frontmatter":{"title":"VentSpace","cover":{"childImageSharp":{"gatsbyImageData":{"layout":"constrained","placeholder":{"fallback":"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAALCAYAAAB/Ca1DAAAACXBIWXMAABYlAAAWJQFJUiTwAAAB40lEQVQoz12S227bMBBElVj3CyVRIqm75Ch2HcMJkKZ9af//u6aYNdIifRiSWi0Ph8v1jB3QDzPqxqGsDeJUIc0rZEWNJCtFuaqR5lwrqKqFbjv5nystc1E2yJTG9XyDF4QpOjti3y8Y56MkfUKTtJR1XtSI4hyNHbHtL1i2M2y/oZ92mG7BuJ6Q5jW8hwAehzgrMR8vKLVDURlkhYYfZgiiHGVtkRYaqdJY1gvc9Rnq54LSOdTaIckrMRHGOR4OETzvMRT6y+sPrM9XdNMObWdUZkLbb7DDUeSmHdN2wXj8hmE/QdVGDiOIevRjkQDpcH06Sz0+E1g74yaU2qLSVmYCGG/a/l5L04P7KboTIBcEGDciSgoEUYYwLqR2ddNBt73MPIx1ZZzuPh/lEMQ4BKnA5Moc/ChDP20wbpAkguP07pAuGCOMhxF4d+zQdguCOMchSL4CmcgXbu0dyMfgRtNNcMMC1y+w3SxOeRBdq9qi7VeEiYL/P9APU8zrDttN8oPfdEmHBBHMa7LfGK+0E6AZnxAkpdSQLr845JXZsNxAJVklzihxVBkR41Xj4NyMt9cP3N7e0WgHz/P/PQqBrp+lNlGqRNLQSournFLN31qyL4dxxO9f7/j+cYObTziE9z78A+A6BdIBXSDrAAAAAElFTkSuQmCC"},"images":{"fallback":{"src":"/Portfolio/static/72c037eb74caa885340c863047df0cc7/1154b/demo.png","srcSet":"/Portfolio/static/72c037eb74caa885340c863047df0cc7/a8dfe/demo.png 175w,\n/Portfolio/static/72c037eb74caa885340c863047df0cc7/978be/demo.png 350w,\n/Portfolio/static/72c037eb74caa885340c863047df0cc7/1154b/demo.png 700w,\n/Portfolio/static/72c037eb74caa885340c863047df0cc7/0d8c6/demo.png 1400w","sizes":"(min-width: 700px) 700px, 100vw"},"sources":[{"srcSet":"/Portfolio/static/72c037eb74caa885340c863047df0cc7/d6c11/demo.avif 175w,\n/Portfolio/static/72c037eb74caa885340c863047df0cc7/eaf78/demo.avif 350w,\n/Portfolio/static/72c037eb74caa885340c863047df0cc7/f4b65/demo.avif 700w,\n/Portfolio/static/72c037eb74caa885340c863047df0cc7/10f28/demo.avif 1400w","type":"image/avif","sizes":"(min-width: 700px) 700px, 100vw"},{"srcSet":"/Portfolio/static/72c037eb74caa885340c863047df0cc7/307e4/demo.webp 175w,\n/Portfolio/static/72c037eb74caa885340c863047df0cc7/30ec3/demo.webp 350w,\n/Portfolio/static/72c037eb74caa885340c863047df0cc7/43462/demo.webp 
700w,\n/Portfolio/static/72c037eb74caa885340c863047df0cc7/6515e/demo.webp 1400w","type":"image/webp","sizes":"(min-width: 700px) 700px, 100vw"}]},"width":700,"height":396}}},"tech":["Python","OpenAI Whisper","REST API","Multimodal"],"github":null,"external":"https://youtu.be/qJl6RKurQww","cta":"https://github.com/sej07/VentSpace"},"html":"<p>Engineered an audio emotion pipeline, training an SVM on 35 librosa features across 1,056 labeled samples from RAVDESS and\nCREMA-D, achieving 84% accuracy with 0.87 precision on the frustration class</p>\n<p>Integrated OpenAI Whisper transcription with cycle-aware context to deliver personalized weekly mental health reports via a multimodal fusion pipeline</p>\n<p>Designed a public REST API with structured JSON output across 5 fields, enabling clean multimodal fusion across audio,\ntext, and cycle-aware pipelines</p>"}}]}}}