<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article article-type="research-article" dtd-version="1.4">
            <front>

                <journal-meta>
            <journal-title-group>
                                                                                    <journal-title>Balkan Journal of Electrical and Computer Engineering</journal-title>
            </journal-title-group>
                            <issn pub-type="ppub">2147-284X</issn>
                                        <issn pub-type="epub">2147-284X</issn>
                                                                                            <publisher>
                    <publisher-name>MUSA YILMAZ</publisher-name>
                </publisher>
                    </journal-meta>
                <article-meta>
                                        <article-id pub-id-type="doi">10.17694/bajece.714293</article-id>
                                                                <article-categories>
                                            <subj-group  xml:lang="en">
                                                            <subject>Artificial Intelligence</subject>
                                                    </subj-group>
                                            <subj-group  xml:lang="tr">
                                                            <subject>Yapay Zeka</subject>
                                                    </subj-group>
                                    </article-categories>
                <title-group>
                    <article-title>Single-Image Super-Resolution Analysis in DCT Spectral Domain</article-title>
                </title-group>
            
                                                    <contrib-group content-type="authors">
                                                                        <contrib contrib-type="author">
                                <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0002-9304-0647</contrib-id>
                                                                <name>
                                    <surname>Aydın</surname>
                                    <given-names>Onur</given-names>
                                </name>
                                                                    <aff>IHSAN DOGRAMACI BILKENT UNIVERSITY</aff>
                                                            </contrib>
                                                    <contrib contrib-type="author">
                                <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0003-0962-7101</contrib-id>
                                                                <name>
                                    <surname>Cinbiş</surname>
                                    <given-names>Ramazan Gökberk</given-names>
                                </name>
                                                                    <aff>ORTA DOĞU TEKNİK ÜNİVERSİTESİ</aff>
                                                            </contrib>
                                                                                </contrib-group>
                        
                <pub-date pub-type="pub" iso-8601-date="2020-07-30">
                    <day>30</day>
                    <month>07</month>
                    <year>2020</year>
                </pub-date>
                                        <volume>8</volume>
                                        <issue>3</issue>
                                        <fpage>209</fpage>
                                        <lpage>217</lpage>
                        
                        <history>
                    <date date-type="received" iso-8601-date="2020-04-03">
                        <day>03</day>
                        <month>04</month>
                        <year>2020</year>
                    </date>
                    <date date-type="accepted" iso-8601-date="2020-07-14">
                        <day>14</day>
                        <month>07</month>
                        <year>2020</year>
                    </date>
                            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 2013, Balkan Journal of Electrical and Computer Engineering</copyright-statement>
                    <copyright-year>2013</copyright-year>
                    <copyright-holder>Balkan Journal of Electrical and Computer Engineering</copyright-holder>
                </permissions>
            
                <abstract><p>Advances in deep learning techniques have led to drastic changes in contemporary methods used for a variety of computer vision problems. Single-image super-resolution is one of these problems that has been significantly and positively influenced by these trends. The mainstream state-of-the-art methods for super-resolution learn a non-linear mapping from low-resolution images to high-resolution images in the spatial domain, parameterized through convolution and transposed-convolution layers. In this paper, we explore the use of spectral representations for deep learning based super-resolution. More specifically, we propose an approach that operates in the space of discrete cosine transform based spectral representations. Additionally, to reduce the artifacts resulting from spectral processing, we propose to use a noise reduction network as a post-processing step. Notably, our approach allows using a universal super-resolution model for a range of scaling factors. We evaluate our approach in detail through quantitative and qualitative results.</p></abstract>
                                                            
            
                <kwd-group>
                    <kwd>deep learning</kwd>
                    <kwd>super-resolution</kwd>
                    <kwd>image processing</kwd>
                </kwd-group>
                            
                                                                                                                                                    </article-meta>
    </front>
    <back>
                            <ref-list>
                                    <ref id="ref1">
                        <label>1</label>
                        <mixed-citation publication-type="confproc">R. Timofte, V. De Smet, and L. Van Gool, “A+: Adjusted anchored neighborhood regression for fast super-resolution,” in Asian Conference on Computer Vision. Springer, 2014, pp. 111–126.</mixed-citation>
                    </ref>
                                    <ref id="ref2">
                        <label>2</label>
                        <mixed-citation publication-type="journal">J. Yang, J. Wright, T. S. Huang, and Y. Ma, “Image super-resolution via sparse representation,” IEEE Transactions on Image Processing, vol. 19, no. 11, pp. 2861–2873, 2010.</mixed-citation>
                    </ref>
                                    <ref id="ref3">
                        <label>3</label>
                        <mixed-citation publication-type="confproc">S. Schulter, C. Leistner, and H. Bischof, “Fast and accurate image upscaling with super-resolution forests,” in IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3791–3799.</mixed-citation>
                    </ref>
                                    <ref id="ref4">
                        <label>4</label>
                        <mixed-citation publication-type="journal">C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2016.</mixed-citation>
                    </ref>
                                    <ref id="ref5">
                        <label>5</label>
                        <mixed-citation publication-type="confproc">J. Kim, J. K. Lee, and K. M. Lee, “Accurate image super-resolution using very deep convolutional networks,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1646–1654.</mixed-citation>
                    </ref>
                                    <ref id="ref6">
                        <label>6</label>
                        <mixed-citation publication-type="preprint">W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, “Deep laplacian pyramid networks for fast and accurate super-resolution,” arXiv preprint arXiv:1704.03915, 2017.</mixed-citation>
                    </ref>
                                    <ref id="ref7">
                        <label>7</label>
                        <mixed-citation publication-type="preprint">S. Anwar, S. Khan, and N. Barnes, “A deep journey into super-resolution: A survey,” arXiv preprint arXiv:1904.07523, 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref8">
                        <label>8</label>
                        <mixed-citation publication-type="confproc">O. Rippel, J. Snoek, and R. P. Adams, “Spectral representations for convolutional neural networks,” in Advances in Neural Information Processing Systems, 2015, pp. 2449–2457.</mixed-citation>
                    </ref>
                                    <ref id="ref9">
                        <label>9</label>
                        <mixed-citation publication-type="confproc">Y. Wang, C. Xu, S. You, D. Tao, and C. Xu, “CNNpack: Packing convolutional neural networks in the frequency domain,” in Advances in Neural Information Processing Systems, 2016, pp. 253–261.</mixed-citation>
                    </ref>
                                    <ref id="ref10">
                        <label>10</label>
                        <mixed-citation publication-type="journal">N. Kumar, R. Verma, and A. Sethi, “Convolutional neural networks for wavelet domain super resolution,” Pattern Recognition Letters, vol. 90, pp. 65–71, 2017.</mixed-citation>
                    </ref>
                                    <ref id="ref11">
                        <label>11</label>
                        <mixed-citation publication-type="confproc">J. Li, S. You, and A. Robles-Kelly, “A frequency domain neural network for fast image super-resolution,” in International Joint Conference on Neural Networks. IEEE, 2018, pp. 1–8.</mixed-citation>
                    </ref>
                                    <ref id="ref12">
                        <label>12</label>
                        <mixed-citation publication-type="journal">S. Xue, W. Qiu, F. Liu, and X. Jin, “Faster image super-resolution by improved frequency-domain neural networks,” Signal, Image and Video Processing, pp. 1–9, 2019.</mixed-citation>
                    </ref>
                                    <ref id="ref13">
                        <label>13</label>
                        <mixed-citation publication-type="confproc">C. Dong, C. C. Loy, and X. Tang, “Accelerating the super-resolution convolutional neural network,” in European Conference on Computer Vision. Springer, 2016, pp. 391–407.</mixed-citation>
                    </ref>
                                    <ref id="ref14">
                        <label>14</label>
                        <mixed-citation publication-type="preprint">C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., “Photo-realistic single image super-resolution using a generative adversarial network,” arXiv preprint arXiv:1609.04802, 2016.</mixed-citation>
                    </ref>
                                    <ref id="ref15">
                        <label>15</label>
                        <mixed-citation publication-type="confproc">T. Dai, J. Cai, Y. Zhang, S.-T. Xia, and L. Zhang, “Second-order attention network for single image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 11065–11074.</mixed-citation>
                    </ref>
                                    <ref id="ref16">
                        <label>16</label>
                        <mixed-citation publication-type="confproc">Y. Wang, F. Perazzi, B. McWilliams, A. Sorkine-Hornung, O. Sorkine-Hornung, and C. Schroers, “A fully progressive approach to single-image super-resolution,” in IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 864–873.</mixed-citation>
                    </ref>
                                    <ref id="ref17">
                        <label>17</label>
                        <mixed-citation publication-type="book">A. V. Oppenheim, Discrete-time signal processing. Pearson Education India, 1999.</mixed-citation>
                    </ref>
                                    <ref id="ref18">
                        <label>18</label>
                        <mixed-citation publication-type="book">K. R. Rao and P. Yip, Discrete cosine transform: algorithms, advantages, applications. Academic Press, 2014.</mixed-citation>
                    </ref>
                                    <ref id="ref19">
                        <label>19</label>
                        <mixed-citation publication-type="journal">R. Clarke, “Relation between the Karhunen-Loève and cosine transforms,” in IEE Proceedings F (Communications, Radar and Signal Processing), vol. 128, no. 6. IET, 1981, pp. 359–360.</mixed-citation>
                    </ref>
                                    <ref id="ref20">
                        <label>20</label>
                        <mixed-citation publication-type="journal">N. Srivastava, G. E. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, “Dropout: a simple way to prevent neural networks from overfitting.” Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.</mixed-citation>
                    </ref>
                                    <ref id="ref21">
                        <label>21</label>
                        <mixed-citation publication-type="confproc">X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in International Conference on Artificial Intelligence and Statistics, 2010, pp. 249–256.</mixed-citation>
                    </ref>
                                    <ref id="ref22">
                        <label>22</label>
                        <mixed-citation publication-type="preprint">D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.</mixed-citation>
                    </ref>
                                    <ref id="ref23">
                        <label>23</label>
                        <mixed-citation publication-type="confproc">C. Dong, Y. Deng, C. Change Loy, and X. Tang, “Compression artifacts reduction by a deep convolutional network,” in IEEE International Conference on Computer Vision, 2015, pp. 576–584.</mixed-citation>
                    </ref>
                                    <ref id="ref24">
                        <label>24</label>
                        <mixed-citation publication-type="journal">M. Bevilacqua, A. Roumy, C. Guillemot, and M. L. Alberi-Morel, “Low-complexity single-image super-resolution based on nonnegative neighbor embedding,” 2012.</mixed-citation>
                    </ref>
                                    <ref id="ref25">
                        <label>25</label>
                        <mixed-citation publication-type="confproc">W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” in IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1874–1883.</mixed-citation>
                    </ref>
                                    <ref id="ref26">
                        <label>26</label>
                        <mixed-citation publication-type="journal">P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, “Contour detection and hierarchical image segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 898–916, 2010.</mixed-citation>
                    </ref>
                                    <ref id="ref27">
                        <label>27</label>
                        <mixed-citation publication-type="confproc">J.-B. Huang, A. Singh, and N. Ahuja, “Single image super-resolution from transformed self-exemplars,” in IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 5197–5206.</mixed-citation>
                    </ref>
                                    <ref id="ref28">
                        <label>28</label>
                        <mixed-citation publication-type="journal">Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, April 2004.</mixed-citation>
                    </ref>
                            </ref-list>
                    </back>
    </article>
