    5.5. Sensitivity comparison between different datasets
    To illustrate the advantages of our proposed dataset, especially for benign pathological images, we performed experiments on different datasets using the same method. Fig. 8 compares the average sensitivity of image-wise results using the ‘Google's Inception-V3 + SVM’ method on the Bioimaging2015 dataset and on our dataset. With the larger dataset, the sensitivity of every class improves; in particular, the classification sensitivity of benign images rises significantly, from 68.7% to 85.1%. Many previous works have noted that the classification sensitivity of benign images is relatively low. For example, in the method proposed by Araújo et al., the image-wise sensitivity of benign images is only 66.7%, whereas that of normal, in situ and invasive images is 77.8%, 77.8% and 88.9%, respectively. This is because the characteristics of benign images are not salient: they can be subdivided into many subcategories, and their characteristics show greater diversity with age.
    Fig. 8. Comparison of sensitivity using the same method across different datasets. The blue and green bars show the average sensitivity of image-wise results using the ‘Google's Inception-V3 + SVM’ method on the Bioimaging2015 dataset and our dataset, respectively. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
    Therefore, to accurately classify benign images, more adequate dataset volumes and data diversity are needed to train the algorithm.
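The per-class sensitivity reported above is the recall of each class: the fraction of images of that class that the classifier labels correctly. A minimal sketch of this computation, using the four class names from the paper and a hypothetical toy set of predictions (not the paper's actual results):

```python
import numpy as np

def per_class_sensitivity(y_true, y_pred, classes):
    """Sensitivity (recall) per class: TP_c / (TP_c + FN_c)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    sens = {}
    for c in classes:
        mask = y_true == c                    # all images truly of class c
        sens[c] = float((y_pred[mask] == c).mean()) if mask.any() else float("nan")
    return sens

# Toy example with the four classes used in the paper (labels are illustrative).
classes = ["normal", "benign", "in situ", "invasive"]
y_true = ["benign", "benign", "benign", "benign", "normal", "normal"]
y_pred = ["benign", "benign", "benign", "normal", "normal", "normal"]
print(per_class_sensitivity(y_true, y_pred, classes))
# benign: 3 of 4 correct -> 0.75; normal: 2 of 2 correct -> 1.0
```

Classes absent from the evaluation set are reported as NaN rather than zero, so missing classes are not confused with complete misclassification.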
    6. Conclusion
    In this article, we proposed a new method for breast cancer pathological image classification using a hybrid convolutional and recurrent deep neural network. Building on richer multilevel CNN feature representations of the pathological image patches, our method places an RNN directly after the feature extractor, so that both the short-term and the long-term spatial correlations between patches are considered. Through extensive experiments and comparisons, our new method was shown to outperform the state-of-the-art method. Additionally, we released a larger and more diverse dataset of breast cancer pathological images to the scientific community. We hope that the dataset can serve as a benchmark to facilitate a broader study of deep learning in the field of breast cancer pathological images.
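The hybrid pipeline described above can be sketched in outline: per-patch CNN feature vectors are fed, in scan order, through a recurrent layer whose hidden state accumulates context from earlier patches, and the final state drives an image-level classifier. The following is a minimal numpy illustration of that data flow only; all dimensions and weights are hypothetical and untrained, and it is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 12 patches per image in scan order, each patch
# summarised by a 64-d feature vector (stand-in for the multilevel CNN features).
n_patches, feat_dim, hid_dim, n_classes = 12, 64, 32, 4

patch_feats = rng.standard_normal((n_patches, feat_dim))

# Toy RNN and classifier parameters (random, illustration only).
W_in = rng.standard_normal((feat_dim, hid_dim)) * 0.1
W_rec = rng.standard_normal((hid_dim, hid_dim)) * 0.1
W_out = rng.standard_normal((hid_dim, n_classes)) * 0.1

# Recurrence over the patch sequence: each hidden state mixes the current
# patch's features with the states of earlier patches, which is how spatial
# correlations between patches enter the prediction.
h = np.zeros(hid_dim)
for x in patch_feats:
    h = np.tanh(x @ W_in + h @ W_rec)

# Image-level prediction from the final hidden state (softmax over 4 classes).
logits = h @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)  # (4,)
```

In practice the recurrent layer would be an LSTM or GRU trained jointly with (or on top of) the CNN, but the shape of the computation is the same: a sequence of patch features in, one image-level class distribution out.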
    For future work, improving classification accuracy will require both outstanding deep learning algorithms and sufficiently large and diverse datasets. In terms of algorithms, attention mechanisms are a promising direction to try, as they have achieved outstanding performance in natural image processing. In terms of datasets, larger datasets should be released, as ImageNet was, to provide a benchmark for the research community. Of course, advances in hardware are equally important: ideally, a complete high-resolution image could be used directly as input to a deep neural network. At the same time, we are working to extend this approach to whole-slide images, which will be more difficult but will produce greater value in clinical practice.
    Acknowledgements