<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article article-type="research-article" dtd-version="1.4" xml:lang="tr">
    <front>
        <journal-meta>
            <journal-id>gummfd</journal-id>
            <journal-title-group>
                <journal-title>Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi</journal-title>
            </journal-title-group>
            <issn pub-type="ppub">1300-1884</issn>
            <issn pub-type="epub">1304-4915</issn>
            <publisher>
                <publisher-name>Gazi Üniversitesi</publisher-name>
            </publisher>
        </journal-meta>
        <article-meta>
            <article-id pub-id-type="doi">10.17341/gazimmfd.1348325</article-id>
            <article-categories>
                <subj-group xml:lang="en">
                    <subject>Computer Vision</subject>
                    <subject>Image Processing</subject>
                    <subject>Deep Learning</subject>
                    <subject>Artificial Intelligence (Other)</subject>
                    <subject>Biomedical Imaging</subject>
                    <subject>Biomedical Diagnosis</subject>
                </subj-group>
                <subj-group xml:lang="tr">
                    <subject>Bilgisayar Görüşü</subject>
                    <subject>Görüntü İşleme</subject>
                    <subject>Derin Öğrenme</subject>
                    <subject>Yapay Zeka (Diğer)</subject>
                    <subject>Biyomedikal Görüntüleme</subject>
                    <subject>Biyomedikal Tanı</subject>
                </subj-group>
            </article-categories>
            <title-group>
                <article-title>Beyin tümörü biyopsisi için derin öğrenme tabanlı risk minimizasyonlu otomatik planlama</article-title>
                <trans-title-group xml:lang="en">
                    <trans-title>Deep learning-based automatic planning with risk minimization for brain tumor biopsy</trans-title>
                </trans-title-group>
            </title-group>
            
            <contrib-group content-type="authors">
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">https://orcid.org/0009-0006-1701-4566</contrib-id>
                    <name>
                        <surname>Şahin</surname>
                        <given-names>Mustafa</given-names>
                    </name>
                    <aff>İnönü Üniversitesi Fen Bilimleri Enstitüsü</aff>
                </contrib>
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-3390-6285</contrib-id>
                    <name>
                        <surname>Şahin</surname>
                        <given-names>Emrullah</given-names>
                    </name>
                    <aff>Kütahya Dumlupınar Üniversitesi, Mühendislik Fakültesi, Yazılım Mühendisliği</aff>
                </contrib>
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-0311-9838</contrib-id>
                    <name>
                        <surname>Özdemir</surname>
                        <given-names>Edanur</given-names>
                    </name>
                    <aff>İnönü Üniversitesi, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü</aff>
                </contrib>
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-1166-8404</contrib-id>
                    <name>
                        <surname>Talu</surname>
                        <given-names>Fatih</given-names>
                    </name>
                    <aff>İnönü Üniversitesi, Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü</aff>
                </contrib>
                <contrib contrib-type="author">
                    <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-7655-0127</contrib-id>
                    <name>
                        <surname>Öztürk</surname>
                        <given-names>Sait</given-names>
                    </name>
                    <aff>Fırat Üniversitesi, Tıp Fakültesi</aff>
                </contrib>
            </contrib-group>
                        
            <pub-date pub-type="pub" iso-8601-date="20240816">
                <day>16</day>
                <month>08</month>
                <year>2024</year>
            </pub-date>
            <volume>40</volume>
            <issue>1</issue>
            <fpage>487</fpage>
            <lpage>500</lpage>

            <history>
                <date date-type="received" iso-8601-date="20230823">
                    <day>23</day>
                    <month>08</month>
                    <year>2023</year>
                </date>
                <date date-type="accepted" iso-8601-date="20240323">
                    <day>23</day>
                    <month>03</month>
                    <year>2024</year>
                </date>
            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 1986, Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi</copyright-statement>
                    <copyright-year>1986</copyright-year>
                    <copyright-holder>Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi</copyright-holder>
                </permissions>
            
            <abstract><p>Biyopsi, tümör türünün belirlenmesi ve patolojik teşhisin konulması için kritik bir işlem olarak karşımıza çıkar. Bu süreç, özellikle tümörlü yapıdan parça alınarak gerçekleştirilen planlama ve cerrahi müdahale olmak üzere iki temel aşamayı içerir. Planlama aşamasında, MRI verisi üzerinden hastanın beynindeki anatomik noktaların işaretlenmesi yapılır ve bu sürecin ortalama dört saat sürdüğü bilinmektedir. Ancak, manuel işaretlemeyle yapılan bu tür planlamaların doğruluk eksiklikleri, sübjektif varyasyonlar ve zaman alıcılığı, otomatik bir planlama aracının kritik bir ihtiyaç olduğunu göstermektedir. Bu çalışmada, MRI ve MRA verisi üzerinden tam otomatik, son teknoloji derin öğrenme mimarilerini içeren bir biyopsi planlama yöntemi önerilmektedir. Önerilen bu yöntem, biyopsi planlamasını hızlı, tutarlı ve tekrarlanabilir bir şekilde gerçekleştirmeyi amaçlamaktadır. Yöntem dört ana aşamadan oluşmaktadır: 1) Beynin üst kabuk bölgesinin çıkarılması, 2) Tümör tespiti ve hedef noktasının belirlenmesi, 3) Beyin damar ağacının bölütlenmesi, 4) Optimum yörünge tespiti için üç aşamanın kombinasyonu ve risk hesaplanması. Bu otomatik yöntem, ITKTubeTK&#039;deki 42 hasta verisiyle doğrulanmıştır. Ayrıca, &quot;3D Slicer&quot; eklentisi olarak hazırlanan bu çalışma, klinikler için ücretsiz bir bilgisayar destekli araç olarak sunulmaktadır. Araştırmanın ilerleyen aşamalarında, risk hesaplamasını daha da geliştirmek amacıyla fMRI verisinin entegrasyonu üzerine çalışılması planlanmaktadır.</p></abstract>
            <trans-abstract xml:lang="en">
                <p>Biopsy is a critical procedure for determining tumor types and establishing pathological diagnoses. This process encompasses two primary stages: planning and surgical intervention. During the planning stage, anatomical points in the patient&#039;s brain are marked based on MRI data, a process known to take an average of four hours. However, the accuracy deficiencies, subjective variations, and time consumption associated with manual marking reveal the critical need for an automated planning tool. In this study, we propose a fully automated biopsy planning method that incorporates cutting-edge deep learning architectures and operates on MRI and MRA data. The proposed approach aims to perform biopsy planning rapidly, consistently, and reproducibly. The method consists of four main stages: 1) removal of the brain&#039;s upper shell, 2) tumor detection and target point determination, 3) segmentation of the brain&#039;s vascular network, and 4) combination of the three stages and risk calculation for optimal trajectory determination. This automatic method has been validated on data from 42 patients in ITKTubeTK. Furthermore, the study is packaged as a &quot;3D Slicer&quot; plugin and offered as a free computer-assisted tool for clinics. In subsequent phases of the research, integration of fMRI data is planned to further enhance the risk calculation.</p></trans-abstract>
                                                            
            
            <kwd-group xml:lang="tr">
                <kwd>Stereotaktik beyin cerrahisi</kwd>
                <kwd>otomatik cerrahi yörünge planlama</kwd>
                <kwd>cerrahi risk azaltma</kwd>
                <kwd>bilgisayar destekli planlama</kwd>
                <kwd>derin öğrenme</kwd>
            </kwd-group>

            <kwd-group xml:lang="en">
                <kwd>Stereotactic brain surgery</kwd>
                <kwd>automatic surgical trajectory planning</kwd>
                <kwd>surgical risk reduction</kwd>
                <kwd>computer-assisted planning</kwd>
                <kwd>deep learning</kwd>
            </kwd-group>
            <funding-group specific-use="FundRef">
                <award-group>
                    <funding-source>
                        <named-content content-type="funder_name">TÜBİTAK ARDEB</named-content>
                    </funding-source>
                    <award-id>122E495</award-id>
                </award-group>
            </funding-group>
                                </article-meta>
    </front>
    <back>
                            <ref-list>
            <ref id="ref1">
                <label>1</label>
                <mixed-citation publication-type="journal">Herrera E., Stereotactic neurosurgery in children and adolescents, Child’s Nervous System, 15, 256–260, 1999.</mixed-citation>
            </ref>
            <ref id="ref2">
                <label>2</label>
                <mixed-citation publication-type="journal">Mishra S., Hologram the future of medicine – from star wars to clinical imaging, Indian Heart Journal, 69, 566–567, 2017.</mixed-citation>
            </ref>
            <ref id="ref3">
                <label>3</label>
                <mixed-citation publication-type="confproc">Dlaka D., Chudy D., Jerbić B., Kaštelančić A., Raguž M., Robot-assisted stereotactic and spinal neurosurgery: A review of literature, 2021 44th International Convention on Information, Communication and Electronic Technology, Opatija-Croatia, 1185–1190, 15 November 2021.</mixed-citation>
            </ref>
            <ref id="ref4">
                <label>4</label>
                <mixed-citation publication-type="journal">Marcus H. J., Vakharia V. N., Ourselin S., Duncan J., Tisdall M., Aquilina K., Robot-assisted stereotactic brain biopsy: systematic review and bibliometric analysis, Child’s Nervous System, 34, 1299–1309, 2018.</mixed-citation>
            </ref>
            <ref id="ref5">
                <label>5</label>
                <mixed-citation publication-type="webpage">Zimmer Biomet, ROSA, https://www.zimmerbiomet.com/en/patients-caregivers/rosa-robotic-technology.html, Accessed July 30, 2023.</mixed-citation>
            </ref>
            <ref id="ref6">
                <label>6</label>
                <mixed-citation publication-type="journal">Amin D. V., Lunsford L. D., Volumetric Resection Using the SurgiScope®: A Quantitative Accuracy Analysis of Robot-Assisted Resection, Stereotactic and Functional Neurosurgery, 82, 250–253, 2005.</mixed-citation>
            </ref>
            <ref id="ref7">
                <label>7</label>
                <mixed-citation publication-type="webpage">Renishaw, Neuromate, https://www.renishaw.com.tr/tr/neuromate-stereotactic-robot--10712, Published 2001, Accessed August 3, 2023.</mixed-citation>
            </ref>
            <ref id="ref8">
                <label>8</label>
                <mixed-citation publication-type="confproc">Shamir R., Freiman M., Joskowicz L., Shoham M., Zehavi E., Shoshan Y., Robot-assisted image-guided targeting for minimally invasive neurosurgery: Planning, registration, and in-vitro experiment, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2005, Heidelberg-Berlin, 131–138, 2005.</mixed-citation>
            </ref>
            <ref id="ref9">
                <label>9</label>
                <mixed-citation publication-type="journal">Serletis D., Pait T. G., Early craniometric tools as a predecessor to neurosurgical stereotaxis, Journal of Neurosurgery, 124 (6), 1867–1874, 2016.</mixed-citation>
            </ref>
            <ref id="ref10">
                <label>10</label>
                <mixed-citation publication-type="journal">Fomenko A., Serletis D., Robotic stereotaxy in cranial neurosurgery: a qualitative systematic review, Neurosurgery, 83 (4), 642–650, 2018.</mixed-citation>
            </ref>
            <ref id="ref11">
                <label>11</label>
                <mixed-citation publication-type="journal">Trope M., Shamir R. R., Joskowicz L., Medress Z., Rosenthal G., Mayer A., Levin N., Bick A., Shoshan Y., The role of automatic computer-aided surgical trajectory planning in improving the expected safety of stereotactic neurosurgery, International Journal of Computer Assisted Radiology and Surgery, 10, 1127–1140, 2015.</mixed-citation>
            </ref>
            <ref id="ref12">
                <label>12</label>
                <mixed-citation publication-type="journal">Bulut C., Ballı T., Yetkin F. E., Comparative classification performances of filter model feature selection algorithms in EEG based brain computer interface system, Journal of the Faculty of Engineering and Architecture of Gazi University, 38 (4), 2397–2408, 2023.</mixed-citation>
            </ref>
            <ref id="ref13">
                <label>13</label>
                <mixed-citation publication-type="journal">Faria C., Erlhagen W., Rito M., De Momi E., Ferrigno G., Bicho E., Review of robotic technology for stereotactic neurosurgery, IEEE Reviews in Biomedical Engineering, 8, 125–137, 2015.</mixed-citation>
            </ref>
            <ref id="ref14">
                <label>14</label>
                <mixed-citation publication-type="journal">Renier C., Targeting inaccuracy caused by mechanical distortion of the Leksell stereotactic frame during fixation, Journal of Applied Clinical Medical Physics, 20, 27–36, 2019.</mixed-citation>
            </ref>
            <ref id="ref15">
                <label>15</label>
                <mixed-citation publication-type="journal">Lim D. H., Kim S. Y., Na Y. C., Cho J. M., Navigation guided biopsy is as effective as frame-based stereotactic biopsy, Journal of Personalized Medicine, 13, 5, 2023.</mixed-citation>
            </ref>
            <ref id="ref16">
                <label>16</label>
                <mixed-citation publication-type="confproc">Hamzé N., Bilger A., Duriez C., Cotin S., Essert C., Anticipation of brain shift in Deep Brain Stimulation automatic planning, 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan-Italy, 3635–3638, August 25-26, 2015.</mixed-citation>
            </ref>
            <ref id="ref17">
                <label>17</label>
                <mixed-citation publication-type="journal">Das S., Stereotactic biopsy in the diagnosis of small brain lesion, Journal of Bangladesh College of Physicians and Surgeons, 39, 24–35, 2020.</mixed-citation>
            </ref>
            <ref id="ref18">
                <label>18</label>
                <mixed-citation publication-type="journal">Dundar T. T., Yurtsever I., Pehlivanoglu M. K., Yildiz U., Eker A., Demir M. A., Mutluer A. S., Tektas R., Kazan M. S., Kitis S., Gokoglu A., Dogan I., Duru N., Machine learning-based surgical planning for neurosurgery: Artificial intelligent approaches to the cranium, Frontiers in Surgery, 9, 2022.</mixed-citation>
            </ref>
            <ref id="ref19">
                <label>19</label>
                <mixed-citation publication-type="journal">Yavas G., Caliskan K. E., Cagli M. S., Three-dimensional–printed marker–based augmented reality neuronavigation: a new neuronavigation technique, Neurosurgical Focus, 51 (2), E20, 2021.</mixed-citation>
            </ref>
            <ref id="ref20">
                <label>20</label>
                <mixed-citation publication-type="journal">Hu Y., Cai P., Zhang H., Adilijiang A., Peng J., Li Y., Che S., Lan F., Liu C., A comparation between frame-based and robot-assisted in stereotactic biopsy, Frontiers in Neurology, 13, 928070, 2022.</mixed-citation>
            </ref>
            <ref id="ref21">
                <label>21</label>
                <mixed-citation publication-type="journal">Marcus H. J., Vakharia V. N., Sparks R., Rodionov R., Kitchen N., McEvoy A. W., Miserocchi A., Thorne L., Ourselin S., Duncan J. S., Computer-assisted versus manual planning for stereotactic brain biopsy: a retrospective comparative pilot study, Operative Neurosurgery, 18 (4), 417, 2020.</mixed-citation>
            </ref>
            <ref id="ref22">
                <label>22</label>
                <mixed-citation publication-type="journal">Zanello M., Carron R., Peeters S., Gori P., Roux A., Bloch I., Oppenheim C., Pallud J., Automated neurosurgical stereotactic planning for intraoperative use: a comprehensive review of the literature and perspectives, Neurosurgical Review, 44, 867–888, 2021.</mixed-citation>
            </ref>
            <ref id="ref23">
                <label>23</label>
                <mixed-citation publication-type="webpage">CASILab at the University of North Carolina-C., ITKTubeTK-Bullitt healthy MR database, Kitware Data, https://data.kitware.com/#collection/591086ee8d777f16d01e0724/folder/58a372fa8d777f0721a64dfb, Accessed August 3, 2023.</mixed-citation>
            </ref>
            <ref id="ref24">
                <label>24</label>
                <mixed-citation publication-type="journal">Isensee F., Schell M., Pflueger I., Brugnara G., Bonekamp D., Neuberger U., Wick A., Schlemmer H. P., Heiland S., Wick W., Bendszus M., Maier-Hein K. H., Kickingereder P., Automated brain extraction of multisequence MRI using artificial neural networks, Human Brain Mapping, 40 (17), 4952–4964, 2019.</mixed-citation>
            </ref>
            <ref id="ref25">
                <label>25</label>
                <mixed-citation publication-type="journal">Fedorov A., Beichel R., Kalpathy-Cramer J., Finet J., Fillion-Robin J.-C., Pujol S., Bauer C., Jennings D., Fennessy F., Sonka M., Buatti J., Aylward S., Miller J. V., Pieper S., Kikinis R., 3D Slicer as an image computing platform for the Quantitative Imaging Network, Magnetic Resonance Imaging, 30 (9), 1323–1341, 2012.</mixed-citation>
            </ref>
            <ref id="ref26">
                <label>26</label>
                <mixed-citation publication-type="webpage">The Trustees of the University of Pennsylvania, The Brain Tumor Segmentation (BraTS) challenges, https://www.med.upenn.edu/cbica/brats/, Accessed February 13, 2022.</mixed-citation>
            </ref>
            <ref id="ref27">
                <label>27</label>
                <mixed-citation publication-type="preprint">Hatamizadeh A., Nath V., Tang Y., Yang D., Roth H., Xu D., Swin UNETR: Swin Transformers for semantic segmentation of brain tumors in MRI images, arXiv preprint, 2022.</mixed-citation>
            </ref>
            <ref id="ref28">
                <label>28</label>
                <mixed-citation publication-type="confproc">Hatamizadeh A., Tang Y., Nath V., Yang D., Myronenko A., Landman B., Roth H. R., Xu D., UNETR: Transformers for 3D medical image segmentation, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 574–584, 2022.</mixed-citation>
            </ref>
            <ref id="ref29">
                <label>29</label>
                <mixed-citation publication-type="confproc">Ronneberger O., Fischer P., Brox T., U-Net: Convolutional networks for biomedical image segmentation, Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich-Germany, 234–241, October 5-9, 2015.</mixed-citation>
            </ref>
            <ref id="ref30">
                <label>30</label>
                <mixed-citation publication-type="confproc">Vaswani A., Shazeer N., Parmar N., Uszkoreit J., Jones L., Gomez A. N., Kaiser L., Polosukhin I., Attention is all you need, Advances in Neural Information Processing Systems, 30, 2017.</mixed-citation>
            </ref>
            <ref id="ref31">
                <label>31</label>
                <mixed-citation publication-type="preprint">Dosovitskiy A., Beyer L., Kolesnikov A., Weissenborn D., Zhai X., Unterthiner T., Dehghani M., Minderer M., Heigold G., Gelly S., et al., An image is worth 16x16 words: Transformers for image recognition at scale, 2020.</mixed-citation>
            </ref>
            <ref id="ref32">
                <label>32</label>
                <mixed-citation publication-type="webpage">vtkOBBTree class reference, https://vtk.org/doc/nightly/html/classvtkOBBTree.html, Accessed January 13, 2023.</mixed-citation>
            </ref>
            <ref id="ref33">
                <label>33</label>
                <mixed-citation publication-type="journal">Sorensen T., A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons, Biologiske Skrifter, 5, 1–34, 1948.</mixed-citation>
            </ref>
            <ref id="ref34">
                <label>34</label>
                <mixed-citation publication-type="journal">Willmott C. J., Matsuura K., Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance, Climate Research, 30 (1), 79–82, 2005.</mixed-citation>
            </ref>
            <ref id="ref35">
                <label>35</label>
                <mixed-citation publication-type="journal">Federer H., Curvature measures, Transactions of the American Mathematical Society, 93 (3), 418–491, 1959.</mixed-citation>
            </ref>
            <ref id="ref36">
                <label>36</label>
                <mixed-citation publication-type="journal">Wang Z., Bovik A. C., Sheikh H. R., Simoncelli E. P., Image quality assessment: from error visibility to structural similarity, IEEE Transactions on Image Processing, 13 (4), 600–612, 2004.</mixed-citation>
            </ref>
                            </ref-list>
                    </back>
    </article>
