  {"id":10489,"date":"2023-08-01T09:21:30","date_gmt":"2023-08-01T14:21:30","guid":{"rendered":"https:\/\/www.vanderbilt.edu\/vise\/?p=10489"},"modified":"2023-08-04T11:50:07","modified_gmt":"2023-08-04T16:50:07","slug":"vise-summer-research-in-progress-rips-8-10-23","status":"publish","type":"post","link":"https:\/\/www.vanderbilt.edu\/vise\/vise-summer-research-in-progress-rips-8-10-23\/","title":{"rendered":"VISE Summer Research In Progress (RiPs) 8.10.23"},"content":{"rendered":"<p>VISE Summer Seminar to be led by<\/p>\n<p><b>Jumanh Atoum (CS)<\/b><\/p>\n<p><img loading=\"lazy\" class=\"alignleft wp-image-10061 size-thumbnail\" src=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/01\/19203116\/Jumanh-Atoum-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/01\/19203116\/Jumanh-Atoum-150x150.jpg 150w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/01\/19203116\/Jumanh-Atoum-80x80.jpg 80w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/01\/19203116\/Jumanh-Atoum-190x190.jpg 190w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/01\/19203116\/Jumanh-Atoum-382x380.jpg 382w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong style=\"font-size: 1.2rem\"><br \/>\nand<\/strong><\/p>\n<p><strong>Xing Yao (CS)<\/strong><br \/>\n<img loading=\"lazy\" class=\"alignleft wp-image-9454 size-thumbnail\" src=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2022\/03\/19203346\/Xing-Yao-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" srcset=\"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2022\/03\/19203346\/Xing-Yao-150x150.jpg 150w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2022\/03\/19203346\/Xing-Yao-300x300.jpg 300w, 
https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2022\/03\/19203346\/Xing-Yao-80x80.jpg 80w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2022\/03\/19203346\/Xing-Yao-100x100.jpg 100w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2022\/03\/19203346\/Xing-Yao-190x190.jpg 190w, https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2022\/03\/19203346\/Xing-Yao.jpg 600w\" sizes=\"(max-width: 150px) 100vw, 150px\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<p><strong>Date<\/strong><span style=\"font-size: 1.2rem\"><strong>:<\/strong> Thursday, August 10, 2023<br \/>\n<\/span><strong style=\"font-size: 1.2rem\">Time:<\/strong><span style=\"font-size: 1.2rem\"> 11:45 am for lunch, noon start<br \/>\n<\/span><strong>Location:<\/strong> Stevenson Center 532<\/p>\n<p><strong>RiP Speaker #1:<\/strong><br \/>\nJumanh Atoum, Computer Science Department<br \/>\n<strong>RiP Title #1:<br \/>\n<\/strong><\/p>\n<p>\u201cAsk me! I am the trainee.\u201d Investigating Limitations of Augmented Reality-based Guidance in a Surgical Training Environment and User-Centered Improvements.<\/p>\n<p><strong>Abstract #1:<br \/>\n<\/strong><span class=\"normaltextrun\">By superimposing computer-generated images on a user\u2019s view of the physical world, Augmented Reality (AR) has transformed how users interact with their surroundings. AR-based applications are being explored in the surgical field for numerous purposes, such as improving the training process. When we investigated expert and trainee surgeons\u2019 behavior in phantom procedures, eye-gaze patterns stood out as a key indicator of skill. In this light, we aim to use this difference to improve trainees\u2019 skill acquisition. We do this by conducting co-design focus groups to investigate the design ideas and features surgeons would need for a gaze-guided training experience. 
Our co-design user study consists of three parts. First, we build an understanding of the current training environment through a semi-structured interview. Second, we conduct a co-design session that encourages trainee surgeons to propose their own features and designs for incorporation into later user studies. We perform qualitative thematic analysis on the data generated from the interviews and the co-design sessions. Our preliminary results show that many improvements can be made to the current surgical training environment; in particular, they point to the importance of visual feedback and the necessity of proper deployment. We then examined the effect of visual guidance on trainee surgeons\u2019 performance. Third, based on the results of the qualitative thematic analysis, we present an AR-based gaze-sharing application on the Microsoft HoloLens 2 headset. This application helps attending surgeons indicate specific regions, communicate with reduced verbal effort, and guide residents throughout an operation. We tested the utility of the application in a user study of endoscopic kidney stone localization completed by urology attending and resident surgeons. The trainee surgeons were asked to fill in the NASA Task Load Index (NASA-TLX) survey at the end of every task. We observe improvements in the NASA-TLX scores (up to 25.71%), in the success rate of the task (a 6.9% increase in the percentage of localized stones), in completion time (a 5.37% decrease), and in gaze analyses (up to 27.93%).<\/span><span class=\"eop\"><br \/>\n<\/span><strong>Bio #1:<br \/>\n<\/strong>Jumanh is a Ph.D. student in computer science. She is interested in surgical robotics, gesture recognition in robotic surgery, and human-computer interaction. 
Currently, she is working on eye-gaze tracking and sharing between expert and novice surgeons and on multi-modal gesture estimation to improve surgical training efficacy.<\/p>\n<p><strong>RiP Speaker #2:<\/strong><br \/>\nXing Yao, Computer Science Department<br \/>\n<strong>RiP Title #2:<\/strong><br \/>\nPLEASE: pay less effort to achieve coarse-to-fine segmentation on low quality ultrasound images<br \/>\n<strong>Abstract #2:<\/strong><br \/>\nDeep convolutional neural networks (CNNs) are powerful tools for medical image segmentation, but they typically require time-intensive pixel-level annotations. To address this, bounding-box-based coarse-to-fine segmentation approaches have been explored. The Segment Anything Model (SAM) has emerged as a strong model for generating fine-grained segmentation masks from sparse prompts such as bounding boxes, but it requires improvement for medical image segmentation tasks. In this study, we present a test-phase prompt augmentation method that combines multi-box prompt augmentation and aleatoric uncertainty thresholding. Our method is designed to enhance SAM&#8217;s performance on low-contrast, low-resolution, and noisy ultrasound images, without additional training or fine-tuning. The approach is assessed on three ultrasound image segmentation tasks. Our results suggest that our method substantially improves SAM&#8217;s performance, with notable robustness to changes in the prompt.<br \/>\n<b>Bio #2:<br \/>\n<\/b>Xing Yao obtained his Bachelor&#8217;s and Master&#8217;s degrees in Biomedical Engineering and is currently a rising third-year Computer Science Ph.D. student at MedICL. Under the advisement of Dr. 
Ipek Oguz, his research focuses on medical image analysis and machine learning.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>VISE Summer Seminar to be led by Jumanh Atoum (CS) &nbsp; &nbsp; &nbsp; and Xing Yao (CS) &nbsp; &nbsp; &nbsp; Date: Thursday, August 10, 2023 Time: 11:45 am for lunch, noon start Location: Stevenson Center 532 RiP Speaker #1: Jumanh Atoum, Computer Science Department RiP Title #1: \u201cAsk me! I am the trainee.\u201d Investigating Limitations&#8230;<\/p>\n","protected":false},"author":670,"featured_media":10510,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","jetpack_publicize_message":"","jetpack_is_tweetstorm":false,"jetpack_publicize_feature_enabled":true,"_links_to":"","_links_to_target":""},"categories":[12],"tags":[41,32,335,231,64,72,31,668],"acf":[],"jetpack_featured_media_url":"https:\/\/cdn.vanderbilt.edu\/vu-URL\/wp-content\/uploads\/sites\/193\/2023\/08\/03092716\/VISE_8-10-2023.jpg","jetpack_publicize_connections":[],"jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p98pzF-2Jb","_links":{"self":[{"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts\/10489"}],"collection":[{"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/users\/670"}],"replies":[{"embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/comments?post=10489"}],"version-history":[{"count":5,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts\/10489\/revisions"}],"predecessor-version":[{"id":10523,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/posts\/10489\/revisions\/10523"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/media\/10510"}],"wp:attachment":[{"hr
ef":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/media?parent=10489"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/categories?post=10489"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.vanderbilt.edu\/vise\/wp-json\/wp\/v2\/tags?post=10489"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}