Since the advent of deep learning a decade ago, convolutional neural networks have been the predominant method for computer vision tasks. However, the Transformer model, which has achieved significant success in natural language processing, is increasingly being applied to computer vision and demonstrates comparable or superior performance. This article discusses the application of the Transformer model to the super-resolution task. A direct application of the original Transformer achieved performance comparable to contemporary convolutional neural networks. However, the self-attention mechanism underpinning the Transformer has quadratic computational complexity with respect to the size of the input image, which presents a significant challenge for processing high-resolution images. Subsequent research has significantly improved performance, but these improvements are not exhaustive. An overview and comparative analysis of these studies are presented.
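To illustrate the quadratic-complexity claim, the following is a minimal sketch of naive self-attention (simplified assumptions: a single head, no learned query/key/value projections, and an image already split into `n` patch tokens). The `(n, n)` score matrix is the source of the quadratic cost in both time and memory:

```python
import numpy as np

def self_attention(x):
    """Naive single-head self-attention over n tokens of dimension d.

    For an image split into n patches, the score matrix is n x n,
    so cost grows quadratically with the number of patches (i.e.
    with image resolution). Projections are omitted for brevity,
    so q = k = v = x here.
    """
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)                # (n, n): the quadratic term
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ x                           # (n, d)

# Doubling the image side length quadruples n, so the (n, n) score
# matrix grows 16x: e.g. a 16x16 patch grid gives n = 256, while
# a 32x32 grid gives n = 1024.
tokens = np.random.default_rng(0).normal(size=(256, 64))
out = self_attention(tokens)
print(out.shape)  # (256, 64)
```

This scaling is what motivates the efficiency-oriented variants the surveyed works propose for high-resolution super-resolution inputs.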