< Dongguk Open Databases & CNN Models >

 

---------------------------------------------

86. Image Prediction Generative Adversarial Network (IPGAN)

85. Dongguk DAL-Net model for segmentation-based recognition of COVID-19 lesions in chest CT scans

84. Dongguk modified DeblurGAN and CNN for recognition of blurred finger-vein image with motion blurred image database

83. Dongguk DRE-Net model for shoulder implants classification 

82. SSF-Net and TSF-Net models for pigment sign detection

81. Dongguk SLS-Net and SLSR-Net

80. Grouped Dilated Convolution Module (GDCM)-based Semantic Segmentation Network with Algorithm

79. Dongguk MDA-BN Model for Effective Diagnosis of COVID-19 Infection

78. PLS-Net and PLRS-Net models

77. Dongguk Joint-GAN and CNN-LSTM for action recognition

76. AS-RIG (Adaptive Selection to Reconstructed Input Data using a Generator) with algorithm for Person Re-Identification

75. Synthesized Low Light CamVid and KITTI database (Syn-CamVid and Syn-KITTI) and Algorithms Including CNN Models

74. Dongguk DeblurGAN and CNN for Iris Recognition

73. Dongguk Light-weighted Ensemble Network for Robust Diagnosis of COVID19 Pneumonia

72. Dongguk CycleGAN-based Domain Adaptation and DenseNet-based Finger-vein Recognition Models (DCDA&DFRM) with Algorithms

71. Dongguk Nuclei-Net Model (R-NSN) with Algorithms

70. Dongguk Pathological Site Classification Models with Algorithm

69. Dongguk single model both for thermal image super-resolution reconstruction and deblurring, and detection model of object and thermal reflection

68. Dongguk enhanced CycleGAN for age estimation and generated images

67. Dongguk Korean Banknote Database Version1 (DKB v1) with Faster R-CNN model and post processing algorithms

66. Dongguk Face and Body Database Version2 (DFB-DB2) with GAN model, CNN models, and algorithms

65. Dongguk Computer-Aided Framework to Diagnose Tuberculosis from Chest X-Ray Images

64. Dongguk blurred gaze database (DBGD) and CycleGAN model

63. Dongguk Models for Thermal Image Super-resolution Reconstruction and Deblurring

62. Dongguk RPS-Net based retinal pigment sign detection model (DRPM) with Algorithms

61. Dongguk DenseNet-based Finger-vein Recognition Model (DDFRM) with Algorithms

60. CNN model for Thermal Reflection Removal

59. Synthesized Low Light Cambridge-driving Labeled Video Database (Syn-CamVid), Synthesized Low Light Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (Syn-KITTI) database, and Algorithm Including CNN Models

58. Dongguk Drone Motion Blur Dataset - Versions 1 and 2 (DDBD-DB1 and DDBD-DB2) & Pretrained Models

57. Dongguk X-RayNet Model with Algorithms (DXM)

56. Dongguk Mitotic Cell Detection Models (DMM)

55. Dongguk CNN Models for Fake Banknote Image Classification Using Visible-Light Images Captured by Smartphone Camera

54. Dongguk mobile finger wrinkle database versions 1 and 2 (DMFW-DB1 and DMFW-DB2), and GAN with CNN models for motion deblurring

53. Enhanced Ultrasound Thyroid Nodule Classification (US-TNC-V2) Algorithm

52. Dongguk generation model of presentation attack face images (DG_FACE_PAD_GEN)

51. Dongguk Spatiotemporal Features-Based Classification Network (DenseNet+LSTM) to Classify the Multiple Gastrointestinal Diseases with Including the Video Indices of Experimental Endoscopy Videos

50. Dongguk Modified Conditional GAN & Deep CNN Models, and Generated Images

49. Dongguk Super-resolution Reconstruction & Age Estimation CNN Model (DSR&AE-CNN)

48. Dongguk ESSN models and algorithm for Semantic Segmentation

47. Dongguk Mask R-CNN Model for Elimination of Thermal Reflections, Generated Data, Dongguk Thermal Image Database (DTh-DB), and Items and Vehicles Database (DI&V-DB)

46. Dongguk Ultrasound Thyroid Nodule Classification (DUS-TNC) algorithm

45. Dongguk Modified CycleGAN for Age Estimation (DMC4AE) and Generated Images

44. Dongguk Vess-Net Models with Algorithm

43. Dongguk CNN for Detecting Road Markings Based on Adaptive ROI with Algorithms

42. Dongguk CNN stacked LSTM and CycleGAN for Action Recognition, Generated Data, and Dongguk Activities & Actions Database (DA&A-DB2)

41. Label Information of Sun Yat-sen University Multiple Modality Re-ID (SYSU-MM01) Database and Dongguk Gender Recognition CNN Models (DGR-CNN).

40. Dongguk cGAN-based Iris Image Generation Model and Generated Images (DGIM&GI)

39. Dongguk CNN and LSTM models for the classification of multiple gastrointestinal (GI) diseases, and video indices of experimental endoscopic videos

38. Dongguk Dual-Camera-based Gaze Database (DDCG-DB1) and CNN models with Algorithms

37. Dongguk Mobile Finger-Wrinkle Database (DMFW-DB1) and CNN models with Algorithms

36. Dongguk low-resolution drone camera dataset & CNN models

35. Dongguk CNN Model for CBMIR

34. Dongguk Person ReID CNN Models (DPRID-CNN)

33. Dongguk DenseNet-based Finger-vein Recognition Model (DDFRM) with algorithms

32. Dongguk OR-Skip-Net Model for Image Segmentation with Algorithm and Black Skin People (BSP) Label Information

 

 

 

86. Image Prediction Generative Adversarial Network (IPGAN)

 

(1) Introduction

We trained the GAN model using the marathon sub-dataset of the Boston University thermal infrared video (BU-TIV) benchmark open dataset for the purpose of image prediction. In the proposed image prediction generative adversarial network (IPGAN) method, thermal images converted to three channels are used as inputs to the IPGAN model, which performs image-to-image translation. We made the IPGAN model open to other researchers.

 

(2) Request for Models

To obtain our pretrained models, please fill the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Ja Hyung Koo, Yu Hwan Kim, and Kang Ryoung Park, Image Region Prediction from Thermal Videos Based on Image Prediction Generative Adversarial Network, Mathematics, in submission.

 

 

 

< Request Form for Models >

 

Please complete the following form to request access to our trained models. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

85. Dongguk DAL-Net model for segmentation-based recognition of COVID-19 lesions in chest CT scans

 

(1) Introduction

We proposed a domain-adaptive lightweight network (DAL-Net) for the effective and timely recognition of minimal COVID-19 lesions in chest CT scans. Our DAL-Net model is designed to overcome the performance degradation caused by multi-source datasets. Two open databases, COVID-19-CT-Seg [1,2] and MosMed [3], were used for the experiments. Experimental results show the superior performance of the proposed network over various state-of-the-art methods.

 

[1] J. Ma et al., Towards data-efficient learning: A benchmark for COVID-19 CT lung and infection segmentation, Med. Phys., vol. 48, no. 3, pp. 1197-1210, 2021.

[2] M. Jun et al., COVID-19 CT lung and infection segmentation dataset, Zenodo. Available online: http://doi.org/10.5281/zenodo.3757476 (accessed on 01 January 2021).

[3] S. P. Morozov et al., MosMedData: Chest CT scans with COVID-19 related findings dataset, 2020, arXiv:2005.06465.

 

(2) Request for Our Model

To obtain our pretrained DAL-Net model, please fill the request form below and send an email to Mr. Muhammad Owais at malikowais266@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Muhammad Owais, Na Rae Baek, and Kang Ryoung Park, Domain-Adaptive Lightweight Network for Effective Recognition of Trivial COVID-19 Lesions in Chest Computed Tomography Scans, IEEE Journal of Biomedical and Health Informatics, in submission

 

< Request Form for DAL-Net Model >

 

Please complete the following form to request access to our trained models with algorithm and databases. These models with algorithm and databases should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

84. Dongguk modified DeblurGAN and CNN for recognition of blurred finger-vein image with motion blurred image database

 

(1) Introduction

We proposed a motion-blurred finger-vein image restoration system based on a modified DeblurGAN (Blind Motion Deblurring Using Conditional Adversarial Networks) and a finger-vein recognition system using DenseNet-161. These systems are designed to overcome the performance degradation caused by motion blur. Two open databases, SDUMLA-HMT-DB [1] and HKPolyU-DB [2], were used for the experiments. The finger-vein restoration and recognition models, together with the motion-blurred image database, are open to other researchers for fair comparison.

 

[1] Y. Yin, L. Liu, and X. Sun, "SDUMLA-HMT: A multimodal biometric database", in Proc. 6th Chin. Conf. Biometric Recognit., Beijing, China, Dec. 2011, pp. 260-268.

[2] A. Kumar and Y. Zhou, "Human identification using finger images," IEEE Trans. Image Process., vol. 21, no. 4, pp. 2228-2244, Apr. 2012.

 

(2) Request for Our Models with Algorithm and Databases

To obtain our pretrained models with algorithm and databases, please fill the request form below and send an email to Mr. Jiho Choi at choijh1027@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Jiho Choi, Jin Seong Hong, Muhammad Owais, Seung Gu Kim, and Kang Ryoung Park, Restoration of Motion Blurred Image by Modified DeblurGAN for Enhancing the Accuracies of Finger-vein Recognition, Sensors, in submission

 

< Request Form for Models with Algorithm and Databases >

 

Please complete the following form to request access to our trained models with algorithm and databases. These models with algorithm and databases should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

83. Dongguk DRE-Net model for shoulder implants classification

 

(1) Introduction

We proposed DRE-Net, which combines two spatial feature extraction networks (a densely connected convolutional network and a residual neural network) with an SCN for robust classification of different types of shoulder implants. We also proposed a rotation-invariant augmentation technique, used in DRE-Net, to achieve state-of-the-art classification performance.

 

(2) Request for Our Models with Algorithm

To obtain our pretrained models, please fill the request form below and send an email to Mr. Haseeb at haseensltn@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Haseeb Sultan, Muhammad Owais, Chanhum Park, Tahir Mahmood, Adnan Haider, and Kang Ryoung Park, Artificial Intelligence-based Recognition of Different Types of Shoulder Implants in X-Ray Scans Based on Dense Residual Ensemble-Network for Personalized Medicine. Journal of Personalized Medicine, in submission.

 

 

< Request Form for Models with Algorithm >

 

Please complete the following form to request access to our trained models with algorithm. These models with algorithm should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

82. SSF-Net and TSF-Net models for pigment sign detection

 

(1) Introduction

We propose two different fully convolutional networks for the computer-aided diagnosis of retinitis pigmentosa, a rare inherited retinal disease. These networks fuse spatial information from multiscale features in the form of a Single Spatial Fusion Network (SSF-Net) and a Triplet Spatial Fusion Network (TSF-Net) for better segmentation of pigment signs. The proposed method provides low-cost ophthalmic screening and analysis opportunities using fundus images. Our proposed networks achieve superior segmentation performance even for minor pigment signs.

 

 

(2) Request for Our Models with Algorithm

To obtain our pretrained models, please fill the request form below and send an email to Prof. Arsalan at arsal@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Adnan Haider, Chanhum Park, and Kang Ryoung Park, Exploiting the Capabilities of Triplet Spatial Information Fusion for Screening and Analysis of Rare Inherited Retinal Disease to Aid Ophthalmic Diagnosis, Information Fusion, in submission.

 

 

< Request Form for Models with Algorithm >

 

Please complete the following form to request access to our trained models with algorithm. These models with algorithm should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

81. Dongguk SLS-Net and SLSR-Net

 

(1) Introduction

We propose an artificial intelligence-based optic cup and optic disc segmentation method for glaucoma detection in retinal fundus images. Optic cup and optic disc segmentation plays a vital role in the computer-aided diagnosis of glaucoma. We employed separable convolution links and residual skip connections in our architecture, and developed two networks (SLS-Net and SLSR-Net) capable of achieving state-of-the-art segmentation performance with a small number of trainable parameters.

 

 

(2) Request for Our Models with Algorithm

To gain access to our trained models and algorithm, please sign and scan the request form below and email it to Mr. Adnan Haider (adnanhaider@dgu.ac.kr). Any work that uses our data must acknowledge the authors by including the following reference.

 

Adnan Haider, Muhammad Arsalan, Min Beom Lee, Muhammad Owais, Tahir Mahmood, Haseeb Sultan, and Kang Ryoung Park, Artificial Intelligence-based Computer-aided Diagnosis of Glaucoma Using Retinal Fundus Images, Expert Systems with Applications, in submission.

 

 

< Request Form for Models with Algorithm >

 

Please complete the following form to request access to our trained models with algorithm. These models with algorithm should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

80. Grouped Dilated Convolution Module (GDCM)-based Semantic Segmentation Network with Algorithm

 

(1) Introduction

We proposed a grouped dilated convolution module (GDCM) that combines existing grouped convolutions with the atrous spatial pyramid pooling technique, trained on two open databases: the Cambridge-driving Labeled Video Database (CamVid) and the Stanford Background Dataset (SBD). The proposed method learns multi-scale features more simply and effectively than existing methods. Because each convolution group in the proposed model has a different dilation rate, the groups have receptive fields of different sizes and learn features corresponding to those receptive fields, so multi-scale context is easily extracted. Moreover, optimal hyper-parameters were obtained from an in-depth analysis, yielding excellent segmentation performance.
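The grouping-with-different-dilations idea can be sketched in plain NumPy (a toy illustration of the mechanism only; the function names, kernel values, and group count here are our own choices, not the trained GDCM layer):

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """'Same'-padded 2-D convolution of one channel with a dilated kernel."""
    kh, kw = kernel.shape
    ph, pw = dilation * (kh // 2), dilation * (kw // 2)
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(kh):
        for j in range(kw):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

def gdcm(x, kernels, dilations):
    """Split channels into groups; each group gets its own dilation rate,
    so groups see receptive fields of different sizes. Outputs are
    concatenated, giving multi-scale context in a single module."""
    groups = np.array_split(x, len(dilations), axis=0)  # x: (C, H, W)
    outs = []
    for g, k, d in zip(groups, kernels, dilations):
        outs.append(np.stack([dilated_conv2d(ch, k, d) for ch in g]))
    return np.concatenate(outs, axis=0)

# Toy input: 4 channels of 8x8; two groups with dilations 1 and 2.
x = np.random.rand(4, 8, 8)
k = np.ones((3, 3)) / 9.0
y = gdcm(x, [k, k], dilations=[1, 2])
print(y.shape)  # (4, 8, 8)
```

In a real network each group would have learned kernels and the module would sit inside a deeper segmentation architecture; the sketch only shows why different dilations per group yield different receptive fields at no extra cost.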

 

(2) Request for Our Model

To obtain our trained GDCM-based semantic segmentation network with algorithm, please fill the request form below and send an email to Mr. Dong Seop Kim at k_ds1028@naver.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Dong Seop Kim, Yu Hwan Kim, and Kang Ryoung Park, Semantic Segmentation by Multi-scale Feature Extraction Based on Grouped Dilated Convolution Module, Mathematics, May 2021.

 

 

< Request Form for Model >

 

Please complete the following form to request access to our trained model. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

79. Dongguk MDA-BN Model for Effective Diagnosis of COVID-19 Infection

 

(1) Introduction

We proposed an optimal multilevel deep-aggregated boosted network (MDA-BN) model, which includes a total of 1.76 million trainable parameters. Our method leverages multilevel deep-aggregated features and multistage training via a mutually beneficial approach to maximize the overall computer-aided diagnosis (CAD) performance. Quantitative analysis shows the superior results of our model over various existing methods.

 

(2) Request for Our Model and Dataset Indices

To obtain our trained MDA-BN model, please fill the request form below and send an email to Mr. Muhammad Owais at malikowais266@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Muhammad Owais, Young Won Lee, Tahir Mahmood, Adnan Haider, Haseeb Sultan, and Kang Ryoung Park, Multilevel Deep-Aggregated Boosted Network to Recognize COVID-19 Infection from Large-Scale Heterogeneous Radiographic Data, IEEE Journal of Biomedical and Health Informatics, in submission 

 

 

< Request Form for Model >

 

Please complete the following form to request access to our trained model with the training and testing data splitting information. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

78. PLS-Net and PLRS-Net models

 

(1) Introduction

We trained our own PLRS-Net and PLS-Net semantic segmentation networks for accurate retinal blood vessel segmentation for diagnostic purposes. Both networks work in a pool-less residual convolutional manner to enhance segmentation accuracy without expensive preprocessing or deeper networks. Our proposed method was tested on three publicly available vessel segmentation datasets: DRIVE, CHASE-DB1, and STARE. The experimental results show that it outperforms existing state-of-the-art methods for retinal vessel segmentation. In addition, it gives medical practitioners and ophthalmologists an opportunity for screening and analysis of diabetic and hypertensive retinopathy. We made our models open to other researchers.

 

(2) Request for Models

To obtain our pretrained models, please fill the request form below and send an email to Prof. Arsalan at arsal@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Adnan Haider, Young Won Lee, and Kang Ryoung Park, Detecting Retinal Vasculature as a Key Biomarker for Deep Learning-based Intelligent Screening and Analysis of Diabetic and Hypertensive Retinopathy, Expert Systems With Applications, in submission.

 

 

< Request Form for Models >

 

Please complete the following form to request access to our trained models. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

77. Dongguk Joint-GAN and CNN-LSTM for action recognition

 

(1) Introduction

We trained the GAN and CNN-LSTM models on our thermal image database and an open database for the purpose of generating joint and skeleton images of the human body. In the proposed joint and skeleton generation method, both the original grayscale thermal image and a converted color thermal image are used as inputs to the GAN model, which performs image-to-image translation. In addition, our proposed action recognition method recognizes human actions using the joint and skeleton images generated by the GAN model as inputs to a CNN-LSTM model. We made the joint and skeleton generation (Joint-GAN) and action recognition (CNN-LSTM) models open to other researchers.

 

(2) Request for Models

To obtain our pretrained models, please fill the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Jin Kyu Kang, Dat Tien Nguyen, Tuyen Danh Pham, Muhammad Arsalan, and Kang Ryoung Park, Action Recognition from Thermal Videos Using Joint and Skeleton Information, IEEE Access, in submission.

 

 

 

< Request Form for Models >

 

Please complete the following form to request access to our trained models. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

76. AS-RIG (Adaptive Selection to Reconstructed Input Data using a Generator) with algorithm for Person Re-Identification

 

(1) Introduction

We trained a GAN-based reconstruction model for AS-RIG on DBPerson-Recog-DB1 and SYSU-MM01. For ease of comparison, we developed the proposed algorithm using a model made available by other research.

 

(2) Request for trained model and generated images

To gain access to our pre-trained models with algorithm, please sign and scan the request form and email to Mr. Jin Kyu Kang (kangjinkyu@dongguk.edu). Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Jin Kyu Kang, Min Beom Lee, Hyo Sik Yoon, and Kang Ryoung Park, AS-RIG: Adaptive Selection to Reconstructed Input Data using a Generator for Person Re-Identification, IEEE Access, in submission

 

 

< Request Form for Model and Generated Images >

 

Please complete the following form to request access to our model and generated images. These files should not be used for commercial use.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

75. Synthesized Low Light CamVid and KITTI database (Syn-CamVid and Syn-KITTI) and Algorithms Including CNN Models

 

(1) Introduction

We used synthesized databases resembling actual nighttime images to measure and evaluate segmentation performance in an extremely low-light environment. CamVid and KITTI were used as the daytime databases, and synthesized low-light CamVid (Syn-CamVid) and KITTI (Syn-KITTI), produced by converting the two daytime databases into low-light images, were used as the low-light databases. First, we used gamma correction to reduce the brightness nonlinearly. Second, because the small amount of light and long camera exposure time at night blur the images, we applied a Gaussian blur filter to reproduce this phenomenon. Lastly, we generated a noisy image similar to an actual nighttime image by adding both Poisson and Gaussian noise.
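The three degradation steps can be sketched as follows (a minimal NumPy illustration; the gamma value, blur kernel, and noise parameters are illustrative choices of ours, not the ones used to build Syn-CamVid and Syn-KITTI):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.5):
    """Separable Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def blur(img, kernel):
    """'Same'-padded 2-D convolution of a single-channel image."""
    kh, kw = kernel.shape
    p = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def synthesize_low_light(img, gamma=3.0, peak=30.0, read_noise=2.0):
    """img: float array in [0, 255], single channel.
    1) gamma correction darkens the image nonlinearly,
    2) Gaussian blur mimics the long exposure at night,
    3) Poisson (shot) + Gaussian (read) noise mimic a night sensor."""
    dark = 255.0 * (img / 255.0) ** gamma                 # step 1
    blurred = blur(dark, gaussian_kernel())               # step 2
    shot = np.random.poisson(blurred / 255.0 * peak) / peak * 255.0  # step 3a
    noisy = shot + np.random.normal(0.0, read_noise, img.shape)      # step 3b
    return np.clip(noisy, 0, 255)

day = np.full((16, 16), 200.0)          # bright toy "daytime" patch
night = synthesize_low_light(day)
print(day.mean() > night.mean())        # the synthesized image is darker
```

In practice the same pipeline would be applied per color channel to every CamVid/KITTI frame, with parameters tuned so the result matches real nighttime footage.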

 

(2) Request for Our Models and Algorithms

To gain access to our datasets and pretrained models with algorithm, please sign and scan the request form and email to Mr. Se Woon Cho at jsu319@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Se Woon Cho, Na Rae Baek, Ja Hyung Koo, and Kang Ryoung Park, Modified Perceptual Cycle Generative Adversarial Network-based Image Enhancement for Improving Accuracy of Low light Image Segmentation, IEEE Access, in submission

 

 

< Request Form for Models and Databases >

Please complete the following form to request access to our trained models and databases. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

                                                      Date:

                                                      Name (signature)

 

 

 

74. Dongguk DeblurGAN and CNN for Iris Recognition

 

(1) Introduction

We trained DeblurGAN models for iris image deblurring on the NICE.II and MICHE iris databases, whose images were blurred with a random motion blur kernel. We made our trained models and generated images open to other researchers.
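A random linear motion blur kernel of the kind mentioned above can be generated and applied roughly as follows (our own minimal NumPy sketch; the kernel lengths, angle distribution, and interpolation actually used for NICE.II and MICHE may differ):

```python
import numpy as np

def motion_blur_kernel(length, angle_deg, size=15):
    """Line-shaped PSF: ones along a segment at the given angle, normalized."""
    k = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    for t in np.linspace(-length / 2, length / 2, 4 * size):
        row = int(round(c + t * np.sin(theta)))
        col = int(round(c + t * np.cos(theta)))
        if 0 <= row < size and 0 <= col < size:
            k[row, col] = 1.0
    return k / k.sum()

def apply_blur(img, k):
    """'Same'-padded convolution of a single-channel image with the PSF."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

# Random length and angle, as when synthesizing blurred training pairs.
rng = np.random.default_rng(0)
kernel = motion_blur_kernel(length=rng.integers(5, 12),
                            angle_deg=rng.uniform(0, 180))
sharp = np.zeros((32, 32)); sharp[:, 16:] = 255.0   # toy edge image
blurred = apply_blur(sharp, kernel)
```

The deblurring GAN is then trained on (blurred, sharp) pairs produced this way.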

 

(2) Request for trained model and generated images

To gain access to our models and images, download the following request form. Please sign and scan the request form and email to Mr. Min Beom Lee (mblee@dongguk.edu). Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Min Beom Lee, Jin Kyu Kang, Hyo Sik Yoon, and Kang Ryoung Park, Enhanced Iris Recognition Method by Generative Adversarial Network-based Image Reconstruction, IEEE Access, in submission.

 

 

< Request Form for Model and Generated Images >

 

Please complete the following form to request access to our model and generated images. These files should not be used for commercial use.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

73. Dongguk Light-weighted Ensemble Network for Robust Diagnosis of COVID19 Pneumonia

 

(1) Introduction

We proposed an optimal deep network, which includes a total of 3.16 million trainable parameters. Moreover, a multilevel activation visualization layer in the proposed network visualizes the lesion patterns as multilevel color activation maps (ML-CAMs) along with the diagnostic result (either COVID-19-positive or -negative). The additional ML-CAM output provides visual insight into the computer's decision and may assist radiologists in validating it, particularly in uncertain situations. Additionally, a novel hierarchical training procedure was adopted to train our network optimally.

 

(2) Request for Our Model and Dataset Indices

To obtain our trained model and the training and testing data splitting information, please fill the request form below and send an email to Mr. Muhammad Owais at malikowais266@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Muhammad Owais, Hyo Sik Yoon, Tahir Mahmood, Haseeb Sultan, Adnan Haider and Kang Ryoung Park, Light-weighted Ensemble Network with Multilevel Activation Visualization for Robust Diagnosis of COVID19 Pneumonia from Large-scale Chest Radiographic Database, Applied Soft Computing, in submission 

 

 

< Request Form for Models and Databases Indices>

 

Please complete the following form to request access to our trained model with the training and testing data splitting information. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

72. Dongguk CycleGAN-based Domain Adaptation and DenseNet-based Finger-vein Recognition Models (DCDA&DFRM) with Algorithms

 

(1) Introduction

We propose a finger-vein recognition system with domain adaptation based on a cycle-consistent adversarial network (CycleGAN). This system is designed to overcome the performance drop caused by heterogeneous data. For feature extraction, a modified DenseNet-161 is used. Two open databases, SDUMLA-HMT-DB [1] and HKPolyU-DB [2], were used for the experiments. The finger-vein recognition model and the CycleGAN model used for domain adaptation are open to other researchers for fair comparison.

 

[1] Y. Yin, L. Liu, and X. Sun, "SDUMLA-HMT: A multimodal biometric database", in Proc. 6th Chin. Conf. Biometric Recognit., Beijing, China, Dec. 2011, pp. 260-268.

[2] A. Kumar and Y. Zhou, "Human identification using finger images," IEEE Trans. Image Process., vol. 21, no. 4, pp. 2228-2244, Apr. 2012.

 

(2) Request for algorithm and trained models

To gain access to our dataset, algorithm, and trained models, please sign and scan the request form below and email it to Mr. Kyoung Jun Noh (nohkyungjun@dongguk.edu). Any work that uses our data must acknowledge the authors by including the following reference.

 

Kyoung Jun Noh, Ji Ho Choi, Jin Seong Hong, and Kang Ryoung Park, Finger-vein Recognition Using Heterogeneous Databases by Domain Adaptation Based on a Cycle-Consistent Adversarial Network, Sensors, in submission.

 

< Request Form for Models and Algorithms >

 

Please complete the following form to request access to our trained models and algorithm. These should not be used for commercial use.

 

Name:

 

Contact: (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

                                                      Date:

                                                      Name (signature)

 

 

71. Dongguk Nuclei-Net Model (R-NSN) with Algorithms

 

(1) Introduction

We propose an artificial intelligence-based nuclei segmentation method for multi-organ histopathology images. Nuclei segmentation plays an important role in cell phenotyping, grading and prognosis of cancer. In our proposed method, we adopt a new nuclei segmentation network which is empowered by residual skip connections. Our method outperforms state-of-the-art methods proposed for nuclei segmentation.

 

(2) Request for algorithm and trained models

To gain access to our algorithm and trained models, please sign and scan the request form below and email it to Mr. Tahir Mahmood (tahirmahmood@dongguk.edu). Any work that uses our data must acknowledge the authors by including the following reference.

 

Tahir Mahmood, Muhammad Owais, Kyoung Jun Noh, Hyo Sik Yoon, Ja Hyung Koo, Adnan Haider, Haseeb Sultan, and Kang Ryoung Park, Accurate Segmentation of Nuclei Regions with Multi-organ Histopathology Images Using Artificial Intelligence for Cancer Diagnosis in Personalized Medicine, Journal of Personalized Medicine, in submission.

 

< Request Form for Program and Models >

 

Please complete the following form to request access to our program and trained models. These should not be used for commercial use.

 

Name:

 

Contact: (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

                                                      Date:

                                                      Name (signature)

 

 

70. Dongguk Pathological Site Classification Models with Algorithm

 

(1) Introduction

We propose a classification method based on an ensemble of deep learning models to overcome the limitations of single-model approaches to the endoscopic pathological site classification problem. Our algorithm was successfully applied to gastric endoscopic pathological site classification using an open dataset, the Hamlyn-GI dataset [1].

 

[1] Ye, M.; Giannarou, S.; Meining, A.; Yang, G-Z. Online tracking and retargeting with applications to optical biopsy in gastrointestinal endoscopic examinations. Med. Image Anal., 2016, 30, 144-157.

 

 

(2) Request for algorithm and trained models

To gain access to our algorithm and trained models, please sign and scan the request form below and email it to Mr. D. T. Nguyen (nguyentiendat@dongguk.edu). Any work that uses our data must acknowledge the authors by including the following reference.

 

Dat Tien Nguyen, Min Beom Lee, Tuyen Danh Pham, Ganbayar Batchuluun*, Muhammad Arsalan, and Kang Ryoung Park, Enhanced Image-based Endoscopic Pathological Site Classification Using an Ensemble of Deep Learning Models, Sensors, in submission.

 

< Request Form for Program and Models >

 

Please complete the following form to request access to our program and trained models. These should not be used for commercial use.

 

Name:

 

Contact: (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

                                                      Date:

                                                      Name (signature)

 

 

69. Dongguk single model both for thermal image super-resolution reconstruction and deblurring, and detection model of object and thermal reflection

 

(1) Introduction

We trained GAN models on our thermal image database and an open database for thermal image reconstruction and object detection. In the proposed reconstruction method, a blurry low-resolution image and the original image are used as inputs to the GAN model, which performs super-resolution and deblurring at the same time with a single network. In addition, the proposed detection method detects objects and thermal reflections in thermal images. Both methods take color thermal images, converted using a colormap, as inputs. We have made both models (reconstruction and detection) open to other researchers.
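Training such a reconstruction model requires pairs of a degraded input and the original image. The sketch below is one illustrative way (not the authors' code) to produce a blurry low-resolution input from an original: a box blur followed by average-pool downsampling. The blur radius and scale factor are assumptions.

```python
import numpy as np

def make_blurry_lowres(img, scale=4, blur=1):
    """Create a (blurry, low-resolution) GAN input paired with the original.

    img: 2-D float array. Box-blurs the image, then downsamples by average
    pooling over scale x scale blocks. Parameters are illustrative only.
    """
    k = 2 * blur + 1
    padded = np.pad(img, blur, mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    for dy in range(k):                      # sum the k x k neighbourhood
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    h = img.shape[0] // scale * scale        # crop to a multiple of scale
    w = img.shape[1] // scale * scale
    lowres = blurred[:h, :w].reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    return lowres

original = np.random.rand(64, 64)            # stands in for a thermal image
degraded = make_blurry_lowres(original)      # paired 16 x 16 network input
```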

 

(2) Request for Models

To obtain our pretrained models, please fill the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Jin Kyu Kang, Dat Tien Nguyen, Tuyen Danh Pham, Muhammad Arsalan, and Kang Ryoung Park, Deep Learning-based Thermal Image Reconstruction and Object Detection, IEEE Access, in submission.

 

 

 

< Request Form for Models >

 

Please complete the following form to request access to our trained models. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

68. Dongguk enhanced CycleGAN for age estimation and generated images

 

(1) Introduction

We propose an enhanced CycleGAN that generates facial images of untrained races and ages, and use it to improve age prediction performance on test data comprising untrained age ranges and races. This algorithm is the first to improve age prediction performance by generating data for untrained age ranges and races using an enhanced CycleGAN (an improvement over the existing CycleGAN). Generating such data mitigates the overfitting problems caused by multiple classes and class imbalance. The enhanced CycleGAN was trained separately on the Morph, MegaAge, and AFAD face databases. We make the enhanced CycleGAN and the generated images publicly available.

 

(2) Request for enhanced CycleGAN model and generated database

To gain access to the generated database and enhanced CycleGAN model, download the following request form. Please scan the completed form and email it to Mr. Yu Hwan Kim (taekkuon@dongguk.edu). Any work that uses the generated images or the enhanced CycleGAN model must acknowledge the authors by including the following reference.

 

Yu Hwan Kim, Se Hyun Nam, and Kang Ryoung Park, Enhanced Cycle Generative Adversarial Network for Generating Face Images of Untrained Races and Ages for Age Estimation, IEEE Access, in submission

 

 

< Request Form for database and Models >

 

Please complete the following form to request access to our database and trained models. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

                                                      Date :

                                                      Name (signature)

 

 

67. Dongguk Korean Banknote Database Version1 (DKB v1) with Faster R-CNN model and post processing algorithms

 

(1) Introduction

The DKB v1 contains eight classes, namely, 10, 50, 100, 500, 1000, 5000, 10000, and 50000 KRW, with each class having 800 images, yielding a total of 6,400 images. The images were photographed using the frontal viewing camera of a Galaxy Note 5 [36]. The banknotes were captured from various distances. To reflect the real-world environment as closely as possible, the images were captured at various locations, under various lighting conditions, and in cases where the bills were randomly folded. The size of the obtained images is 1920×1080 pixels. Furthermore, an experiment was conducted using the open JOD database to verify whether the proposed algorithm can be applied to various types of banknote images. The JOD open database contains nine classes (i.e., 1 qirsh, 5 and 10 piastres, and 1/4, 1/2, 1, 5, 10, and 20 dinars), yielding a total of 330 images. The size of the obtained images is 3264×2448 pixels. We use these databases with Faster R-CNN and three post processing algorithms. We make DKB v1, the Faster R-CNN model, and the post processing algorithms publicly available.

 

 

(2) Request for DKB v1 and Faster R-CNN model

To gain access to the DKB v1 with Faster R-CNN model and post processing algorithms, download the following request form. Please scan the request form and email to Mr. Chan Hum Park (pipetsupport@naver.com). Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.

 

Chan Hum Park, Se Woon Cho, Na Rae Baek, Jiho Choi, and Kang Ryoung Park, Deep Feature-based Three-stage Detection of Banknotes and Coins for Assisting Visually Impaired People, IEEE Access, in submission.

 

 

< Request Form for database and Models >

 

Please complete the following form to request access to our database and trained models. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

                                                      Date :

                                            Name (signature)

 

 

 

66. Dongguk Face and Body Database Version2 (DFB-DB2) with GAN model, CNN models, and algorithms

 

(1) Introduction

DFB-DB2 was created using images of 22 people obtained with two types of cameras, to assess the performance of the proposed method in a variety of camera environments. The first camera was a Logitech BCC 950; its specifications include a viewing angle of 78°, a maximum resolution of full high-definition (HD) 1080p, and auto-focusing at 30 frames per second (fps). The second camera was a Logitech C920; its specifications include a maximum resolution of full HD 1080p, a viewing angle of 78° at 30 fps, and auto-focusing. Images were taken in an indoor environment with the indoor lights on, and each camera was installed at a height of 2 m 40 cm. The database is divided into two categories according to the camera: the first part was captured by the Logitech BCC 950, and the second part consists of images obtained by the Logitech C920 at a similar camera angle to the first. DFB-DB2 differs from DFB-DB1 in that it contains blurred images, which are not included in DFB-DB1.

In addition, we make our GAN model, the two CNN models trained with DFB-DB2 and the open ChokePoint database [1], and our algorithms publicly available.

 

1. ChokePoint Database. Available online: http://arma.sourceforge.net/chokepoint/ (accessed on 20 June 2020).

 

(2) Request for DFB-DB2 with GAN model, CNN model, and algorithms

To gain access to DFB-DB2 with GAN model, CNN model and algorithms, download the following request form. Please scan the request form and email to Mr. Ja Hyung Koo (koo6190@naver.com).

Any work that uses this DFB-DB2 with GAN model, CNN model and algorithms must acknowledge the authors by including the following reference.

 

Ja Hyung Koo, Se Woon Cho, Na Rae Baek, and Kang Ryoung Park, Face and Body-based Human Recognition by GAN-based Blur Restoration, Sensors, in submission.

 

 

< DFB-DB2, GAN model, and CNN model Request Form >

 

Please complete the following form to request access to the DFB-DB2, GAN model, and CNN model (All contents must be completed). This database, GAN model, and CNN model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

65. Dongguk Computer-Aided Framework to Diagnose Tuberculosis from Chest X-Ray Images

 

(1) Introduction

We proposed a novel deep learning-based computer-aided framework to diagnose tuberculosis from a given chest X-ray (CXR) image and provide the appropriate visual and descriptive information from a database of previous cases. Such information can further assist radiologists in subjectively validating the computer decision. Thus, the subjective and computer decisions validate each other and ultimately result in effective diagnosis and treatment.

 

(2) Request for Our Model and Dataset Indices

To obtain our trained model and the training and testing data splitting information, please fill the request form below and send an email to Mr. Muhammad Owais at malikowais266@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Muhammad Owais, Muhammad Arslan, Tahir Mahmood, Yu Hwan Kim, and Kang Ryoung Park, Mining-based Diagnosis: A Comprehensive Computer-Aided Framework to Diagnose Tuberculosis from Chest X-Ray Images based on Multi-Scale Information Fusion, Journal of Medical Internet Research, in submission.

 

 

< Request Form for Models and Databases Indices>

 

Please complete the following form to request access to our trained model and the training and testing data-splitting information. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

64. Dongguk blurred gaze database (DBGD) and CycleGAN model

 

(1) Introduction

The Dongguk blurred gaze database (DBGD) was constructed from images of 26 drivers captured by dual near-infrared (NIR) cameras with illuminators in a vehicle environment, and is classified into 16 situations, such as wearing sunglasses, different glasses, or hats, and using mobile phones. We make DBGD and our CycleGAN model trained on this database open to other researchers.

 

(2) Request for DBGD and CycleGAN model

To gain access to the DBGD with CycleGAN model, download the following request form. Please scan the request form and email to Mr. Hyo Sik Yoon (yoonhs@dongguk.edu).

Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.

 

Hyo Sik Yoon and Kang Ryoung Park, CycleGAN-based Deblurring for Gaze Tracking in Vehicle Environments, IEEE Access, in submission.

 

 

< Request Form for database and Models >

 

Please complete the following form to request access to our database and trained models. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

                                                      Date :

                                            Name (signature)

 

 

 

63. Dongguk Models for Thermal Image Super-resolution Reconstruction and Deblurring

 

(1) Introduction

We trained GAN models on our thermal image database and an open database for thermal image reconstruction. In the proposed super-resolution method, a low-resolution image and the original image are used as inputs to the GAN model. In the proposed deblurring method, a blurred image and the original image are used as inputs to the GAN model. We have made both models (super-resolution reconstruction and deblurring) open to other researchers.

 

(2) Request for Models

To obtain our pretrained models, please fill the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Young Won Lee, Dat Tien Nguyen, Tuyen Danh Pham, and Kang Ryoung Park, Thermal Image Reconstruction Using Deep Learning, IEEE Access, in submission.

 

 

 

< Request Form for Models >

 

Please complete the following form to request access to our trained model. This model should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

 

62. Dongguk RPS-Net based retinal pigment sign detection model (DRPM) with Algorithms

 

(1) Introduction

In this study, we proposed an accurate retinal pigment segmentation network (RPS-Net) that segments pigment signs for diagnostic purposes. RPS-Net is a deep learning-based semantic segmentation network specifically designed to accurately detect and segment pigment signs with fewer trainable parameters. Compared with conventional deep learning methods, the proposed method applies a feature enhancement policy through multiple dense connections between the convolutional layers, which enables the network to discriminate between normal and diseased eyes and to accurately segment the diseased area from the background.

 

(2) Request for Models

To gain access to our databases and pretrained models with algorithm, please sign and scan the request form and email it to Mr. Muhammad Arsalan at arsal@dongguk.edu. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Na Rae Baek, Muhammad Owais, Tahir Mahmood, and Kang Ryoung Park, "Deep Learning-based Detection of Pigment Signs for Analysis and Diagnosis of Retinitis Pigmentosa," Sensors, in submission.

 

< Request Form for Models >

 

Please complete the following form to request access to our trained models. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

                                                      Date :

                                            Name (signature)

 

 

 

61. Dongguk DenseNet-based Finger-vein Recognition Model (DDFRM) with Algorithms

 

(1) Introduction

In this study, we proposed a finger-vein recognition system based on score-level fusion of shape and texture images. To extract the matching score for each shape image and texture image, a revised DenseNet-161 with composite image input is used. The finger-vein recognition models trained on our experimental databases are made available to other researchers for fair performance comparison.
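Score-level fusion, as described above, combines the two matchers' scores into one decision value. A minimal weighted-sum sketch follows; the weight, threshold, and score values are illustrative assumptions, not the paper's tuned settings.

```python
def fuse_scores(shape_score, texture_score, w=0.5):
    """Weighted-sum score-level fusion of two matchers' distance scores.

    Lower scores mean a better match; `w` would normally be tuned on
    validation data (the value here is illustrative).
    """
    return w * shape_score + (1.0 - w) * texture_score

def decide(score, threshold=0.4):
    """Accept as genuine when the fused distance falls below the threshold."""
    return "genuine" if score < threshold else "imposter"

# Illustrative matcher outputs for one comparison
fused = fuse_scores(0.2, 0.5, w=0.6)   # 0.6 * 0.2 + 0.4 * 0.5 = 0.32
verdict = decide(fused)
```

The fused score typically separates genuine and imposter distributions better than either matcher alone, which is the motivation for fusing shape and texture information.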

 

(2) Request for Models

To get access to our pretrained models with algorithms please sign and scan the request form and send an email to Mr. Kyoung Jun Noh at nohkyungjun@dongguk.edu. Any work that uses our models and algorithm must acknowledge the authors by including the following reference.

 

Kyoung Jun Noh, Jiho Choi, Jin Seong Hong and Kang Ryoung Park, Finger-vein Recognition Based on Densely Connected Convolutional Network Using Score-Level Fusion with Shape and Texture Images, IEEE Access, Vol. 8, pp. 96748-96766, June 2020.

 

 

< Request Form for Models >

 

Please complete the following form to request access to our trained models. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

60. CNN model for Thermal Reflection Removal

 

(1) Introduction

We trained the CNN model on our thermal image database and an open database for thermal reflection removal. In the proposed method, a region image and the original image are used as inputs to the CNN model. We have made the model, a pruned fully convolutional network (PFCN), open to other researchers.

 

(2) Request for Models

To obtain our pretrained models, please fill in the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Na Rae Baek, Dat Tien Nguyen, Tuyen Danh Pham, and Kang Ryoung Park, Region-based Removal of Thermal Reflection using Pruned Fully Convolutional Network, IEEE Access, Vol. 8, pp. 75741-75760, May 2020.

 

< Request Form for Models >

 

Please complete the following form to request access to our trained models. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

59. Synthesized Low Light Cambridge-driving Labeled Video Database (Syn-CamVid), Synthesized Low Light Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (Syn-KITTI) database, and Algorithm Including CNN Models

 

(1) Introduction

We used synthesized databases that closely resemble real low-light environments to perform multi-class segmentation in low-light conditions. Images taken in real low-light or nighttime environments have poor quality and visibility due to low brightness, blur, and noise, making it difficult for humans to create accurate segmentation labels for all the objects in an image. Therefore, to obtain accurate segmentation labels and paired images, experiments were performed using the Syn-CamVid and Syn-KITTI databases, produced by converting the daytime CamVid and KITTI databases into low-light images, respectively. To create extremely low-light images similar to an actual environment with little external light, we combined existing low-light image generation methods. In such an environment, the brightness value does not decrease linearly: comparing a daytime image with a nighttime image, the brightness of very bright pixels decreases more, whereas that of darker pixels decreases less. We used gamma correction to produce this nonlinear brightness change. In a low-light environment, blurry images are captured due to the limited light and the camera's exposure time, which we implemented with a Gaussian blur kernel. Finally, the noise generated by the camera sensor in low-light images was added using Gaussian and Poisson noise functions.

 

(2) Request for Our Models and Algorithms

To gain access to our datasets and pretrained models with algorithm, please sign and scan the request form and email to Mr. Se Woon Cho at jsu319@dongguk.edu. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Se Woon Cho, Na Rae Baek, Ja Hyung Koo, Muhammad Arsalan, and Kang Ryoung Park, "Semantic Segmentation with Low Light Images by Modified CycleGAN-based Image Enhancement", IEEE Access, Vol. 8, pp. 93561-93585, June 2020.

 

< Request Form for Models and Databases >

Please complete the following form to request access to our trained models and databases. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

58. Dongguk Drone Motion Blur Dataset - Versions 1 and 2 (DDBD-DB1 and DDBD-DB2) & Pretrained Models

 

(1) Introduction

We used the open Dongguk drone camera dataset ver.2 (DDroneC-DB2) to generate two datasets by two different methods, denoted the synthesized motion blur drone database 1 (SMBD-DB1) and synthesized motion blur drone database 2 (SMBD-DB2). For the first dataset, motion-blurred images were generated by applying motion-blurring kernels, created by applying subpixel interpolation to a trajectory vector. Each trajectory vector, a complex-valued vector, corresponds to the discrete positions of an object undergoing 2D random motion in a continuous domain. For the second dataset, we synthesized realistic motion blur close to motion blur in the wild. Specifically, we used a video frame interpolation model to increase the frame rate of DDroneC-DB2 videos from 30 to 120 FPS, and then generated blurred images by averaging consecutive frames of the resulting high-frame-rate videos. With these two datasets, we trained the proposed deblurring CNN and marker detection CNN. We have made our synthesized datasets and CNN models publicly available for fair comparison and result reproduction.
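The second synthesis method (averaging consecutive high-frame-rate frames to mimic a long exposure) can be sketched as below. The window size and frame dimensions are illustrative, not the paper's settings.

```python
import numpy as np

def average_blur(frames, window=8):
    """Synthesize motion-blurred images by averaging consecutive frames.

    frames: array of shape (num_frames, H, W). Each output image is the mean
    of `window` consecutive frames from the high-frame-rate video, which
    approximates the blur accumulated during a longer exposure.
    """
    n = frames.shape[0] // window * window            # drop leftover frames
    grouped = frames[:n].reshape(-1, window, *frames.shape[1:])
    return grouped.mean(axis=1)

video = np.random.rand(120, 32, 32)   # stands in for 1 s of interpolated 120-FPS video
blurred = average_blur(video)         # 15 synthetic motion-blurred frames
```

Averaging reduces frame-to-frame variation, so the synthetic blur looks smoother than kernel-based blur and is closer to real camera motion blur.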

 

(2) Request for Our Models and Algorithms

To gain access to our datasets and pretrained models with algorithm, please sign and scan the request form and email to Prof. Tuyen Danh Pham at phamdanhtuyen@gmail.com. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Noi Quang Truong, Young Won Lee, Muhammad Owais, Dat Tien Nguyen, Ganbayar Batchuluun, Tuyen Danh Pham*, and Kang Ryoung Park, SlimDeblurGAN-based Motion Deblurring and Marker Detection for Autonomous Drone Landing, Sensors, in submission.

 

< Request Form for Models and Databases >

Please complete the following form to request access to our trained models and databases. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

57. Dongguk X-RayNet Model with Algorithms (DXM)

 

(1) Introduction

In this study, semantic segmentation-based automatic cardiothoracic ratio (CTR) estimation is proposed. The CTR is important for diagnosing cardiac and related diseases. The proposed method consists of two multiclass segmentation networks (X-RayNet1 and X-RayNet2) that provide accurate boundaries of chest anatomical structures such as the lungs, heart, and clavicle bones. Accurate boundary segmentation of these anatomies allows the CTR, which is considered a biomarker for cardiomegaly and other diseases, to be computed automatically. Three publicly available datasets, the Japanese Society of Radiological Technology (JSRT), Montgomery County (MC), and Shenzhen X-ray (SC) sets, were used to evaluate the performance of the proposed networks. The experimental results show that our method outperforms existing approaches and provides accurate boundaries for CTR computation. We have made our models publicly available for fair comparison and result reproduction. All experiments were implemented in MATLAB R2019a.
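Once heart and lung boundaries are segmented, the CTR is the widest horizontal cardiac span divided by the widest thoracic span. A minimal sketch in Python (rather than the authors' MATLAB) follows; using the combined lung mask as the thoracic extent and the toy masks are illustrative simplifications.

```python
import numpy as np

def width(mask):
    """Maximal horizontal extent (in pixels) of a binary mask."""
    cols = np.where(mask.any(axis=0))[0]
    return int(cols[-1] - cols[0] + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask, lung_mask):
    """CTR = widest heart span / widest thoracic span.

    Here the combined lung mask stands in for the thoracic boundary
    (an illustrative simplification of the measurement).
    """
    return width(heart_mask) / width(lung_mask)

# Toy 10x10 masks: heart spans columns 3-7, lungs span columns 0-9
heart = np.zeros((10, 10), dtype=bool); heart[4:8, 3:8] = True
lungs = np.zeros((10, 10), dtype=bool); lungs[2:9, 0:4] = True; lungs[2:9, 6:10] = True
ctr = cardiothoracic_ratio(heart, lungs)   # 5 / 10 = 0.5
```

A CTR above roughly 0.5 is the classical radiographic criterion for suspected cardiomegaly, which is why accurate boundary segmentation matters here.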

 

(2) Request for Our Models and Algorithms

To gain access to our databases and pretrained models with algorithm, please sign and scan the request form and email it to Mr. Muhammad Arsalan at arsal@dongguk.edu. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Muhammad Owais, Tahir Mahmood, Jiho Choi, and Kang Ryoung Park, "Artificial Intelligence-based Diagnosis of Cardiac and Related Diseases ", Journal of Clinical Medicine, Vol. 9, Issue 3(871), pp. 1-27, March 2020.

 

< Request Form for Models and Algorithms >

Please complete the following form to request access to our trained models and algorithms. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

56. Dongguk Mitotic Cell Detection Models (DMM)

 

(1) Introduction

In this study, we proposed a multistage mitosis detection method based on a faster region-based convolutional neural network (Faster R-CNN) and deep CNNs. Two open breast cancer datasets, ICPR 2012 and MITOS-ATYPIA-14, were used. Our proposed technique outperforms existing techniques. We have made our models publicly available so that other researchers can reproduce our results and make fair comparisons. All experiments were implemented in MATLAB R2019a.

 

(2) Request for Our Models and Algorithms

To gain access to our databases and pretrained models with algorithm, please sign and scan the request form and email it to Mr. Tahir Mahmood at tahirmahmood@dongguk.edu. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Tahir Mahmood, Muhammad Arsalan, Muhammad Owais, Min Beom Lee, and Kang Ryoung Park, "Artificial Intelligence-based Mitosis Detection in Breast Cancer Histopathology Images Using Faster R-CNN and Deep CNNs", Journal of Clinical Medicine, Vol. 9, Issue 3(749), pp. 1-25, March 2020.

 

< Request Form for Models and Algorithms >

Please complete the following form to request access to our trained models and algorithms. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

55. Dongguk CNN Models for Fake Banknote Image Classification Using Visible-Light Images Captured by Smartphone Camera

 

(1) Introduction

In this study, we proposed a fake banknote classification method using CNNs on banknote images captured by smartphone cameras under visible-light conditions. The fake banknote dataset used in the experiments consists of images of banknotes of three national currencies: EUR (EUR 5, EUR 10, EUR 20, EUR 50, and EUR 100), USD (USD 1, USD 5, USD 10, USD 20, USD 50, and USD 100), and KRW (KRW 1000, KRW 5000, KRW 10,000, and KRW 50,000). The fake banknotes were created by capturing the original banknotes with a scanner and smartphone cameras and printing the captures on a color printer. We subsequently captured banknote images with the same smartphones while holding the fake and genuine banknotes in front of the cameras or placing them on tables. Training was conducted using the MATLAB implementation of CNNs with the AlexNet, ResNet-18, and GoogleNet architectures.

 

(2) Request for Our Models and Algorithm

To gain access to these files, download the following request form. Please scan the request form and email to Dr. Tuyen Danh Pham (phamdanhtuyen@dongguk.edu). Any work that uses these files with algorithm must acknowledge the authors by including the following reference.

 

Tuyen Danh Pham, Chanhum Park, Dat Tien Nguyen, Ganbayar Batchuluun, and Kang Ryoung Park, Deep Learning-Based Fake-Banknote Detection Using Visible-Light Images Captured by Smartphone Cameras, IEEE Access, Vol. 8, pp. 63144-63161, April 2020.

 

 

 

< Request Form for Models, Algorithm, and Databases >

Please complete the following form to request access to our trained models and algorithm. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

54. Dongguk mobile finger wrinkle database versions 1 and 2 (DMFW-DB1 and DMFW-DB2), and GAN with CNN models for motion deblurring

 

(1) Introduction

To evaluate performance on images captured by a variety of smartphone cameras, DMFW-DB2 was collected with the rear camera of a Samsung Galaxy S8+. Frames were extracted from the captured videos at 30 fps, and motion-blurred images were produced by averaging consecutive captured frames. In addition, DMFW-DB1 (see item 37) was artificially blurred with a motion-blurring kernel. This study used DeblurGAN to restore the motion-blurred images of DMFW-DB1 and DMFW-DB2; the restored images are then used as input to a ResNet-101 to perform finger wrinkle recognition.

 

(2) Request for Our Models, Algorithm, and Databases

To gain access to our databases and pretrained models with algorithm, please sign and scan the request form and email it to Mr. Nam Sun Cho at diko93@dongguk.edu. Any work that uses our models, algorithm, and databases must acknowledge the authors by including the following reference.

 

Nam Sun Cho, Chan Sik Kim, Chanhum Park, and Kang Ryoung Park, "GAN-based Blur Restoration for Finger Wrinkle Biometrics System", IEEE Access, Vol. 8, pp. 49857- 49872, March 2020.

 

 

< Request Form for Models and Algorithm >

 

Please complete the following form to request access to our trained models and algorithm. These should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

53. Enhanced Ultrasound Thyroid Nodule Classification (US-TNC-V2) Algorithm

 

(1) Introduction

In this study, we enhance the classification performance of an ultrasound image-based thyroid nodule classification system. The pretrained model was successfully trained using the TDID dataset [1].

 

[1] Pedraza, L.; Vargas, C.; Narvaez, F.; Duran, O.; Munoz, E.; Romero, E. An open access thyroid ultrasound-image database. In Proceedings of the 10th International Symposium on Medical Information Processing and Analysis, Colombia, 28 January, 2015 (in SPIE Proceedings, Vol. 9287, pp. 1-6).

 

(2) Request for Our Models and Algorithm

To gain access to our algorithm and pretrained models, please sign and scan the request form and email it to Prof. D. T. Nguyen at nguyentiendat@dongguk.edu. Any work that uses our algorithm must acknowledge the authors by including the following reference.

 

D. T. Nguyen, et al. "Ultrasound Image-based Diagnosis of Malignant Thyroid Nodule Using Artificial Intelligence", Sensors, Vol. 20, Issue 7(1822), pp. 1-23, March 2020.

 

 

< Request Form for Models and Algorithm >

 

Please complete the following form to request access to our trained models and algorithm. These models should not be used for commercial use.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

52. Dongguk generation model of presentation attack face images (DG_FACE_PAD_GEN)

 

(1) Introduction

We trained our generative adversarial network (GAN)-based model to artificially generate presentation attack (PA) face images to reduce the efforts of PA image acquisition.

 

(2) Request for obtaining DG_FACE_PAD_GEN

 

To obtain our pretrained model, please fill in the request form below and send an email to Mr. Nguyen at nguyentiendat@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Dat Tien Nguyen, Tuyen Danh Pham, Ganbayar Batchuluun, Kyoung Jun Noh, and Kang Ryoung Park, Presentation Attack Face Image Generation Based on Deep Generative Adversarial Network, Sensors, Vol. 20, Issue 7(1810), pp. 1-24, March 2020.

 

 

< Request Form for DG_FACE_PAD_GEN >

 

Please complete the following form to request access to the DG_FACE_PAD_GEN. These files should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

51. Dongguk Spatiotemporal Features-Based Classification Network (DenseNet+LSTM) to Classify Multiple Gastrointestinal Diseases, Including the Video Indices of Experimental Endoscopy Videos

 

(1) Introduction

We trained a spatiotemporal features-based classification model (named DenseNet+LSTM) to classify multiple gastrointestinal diseases using endoscopic videos. Moreover, after classification, the extracted features were further used to retrieve images of similar medical conditions, such as normal and abnormal cases, from a large endoscopic database.
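The pipeline above (per-frame features, recurrent aggregation, classification, then feature-based retrieval) can be sketched as follows. This is a minimal NumPy stand-in for illustration only: the random projections replace the actual trained DenseNet and LSTM weights, and all sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_features(frame, w_in):
    # Stand-in for the DenseNet backbone: project a flattened frame
    # to a fixed-length feature vector.
    return np.tanh(frame.ravel() @ w_in)

def recurrent_aggregate(features, w_h):
    # Stand-in for the LSTM: fold per-frame features into a single
    # spatiotemporal descriptor via a simple recurrence.
    h = np.zeros(w_h.shape[0])
    for f in features:
        h = np.tanh(w_h @ h + f)
    return h

def classify(video, w_in, w_h, w_out):
    feats = [frame_features(f, w_in) for f in video]
    h = recurrent_aggregate(feats, w_h)
    return h, int(np.argmax(w_out @ h))

def cosine(a, b):
    # Similarity score used to retrieve database images whose
    # descriptors match the query's medical condition.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy sizes: 8 frames of 16x16 grayscale, 32-d features, 5 disease classes.
video = rng.normal(size=(8, 16, 16))
w_in = 0.1 * rng.normal(size=(256, 32))
w_h = 0.1 * rng.normal(size=(32, 32))
w_out = 0.1 * rng.normal(size=(5, 32))
descriptor, label = classify(video, w_in, w_h, w_out)
```

Retrieval then reduces to ranking a stored bank of descriptors by `cosine` similarity against the query descriptor.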

 

(2) Request for Our Algorithm and Dataset Indices

To obtain our trained model, including the video indices of experimental endoscopy videos, please fill in the request form below and send an email to Mr. Muhammad Owais at malikowais266@gmail.com. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Muhammad Owais, Muhammad Arsalan, Tahir Mahmood, Jin Kyu Kang, and Kang Ryoung Park, Automated Diagnosis of Various Gastrointestinal Lesions Using Deep Learning-Based Classification and Retrieval Framework with Large Endoscopic Database, Journal of Medical Internet Research, In Submission.

 

 

 

< Request Form for Models and Databases Indices>

 

Please complete the following form to request access to our trained model, including the video indices of experimental endoscopy videos. This model should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

                                            Name (signature)

 

 

 

50. Dongguk Modified Conditional GAN & Deep CNN Models, and Generated Images

 

(1) Introduction

We trained our modified conditional GAN and deep CNN models for finger-vein optical blur restoration and finger-vein recognition using the PolyU-DB [1] and SDU-DB [2] databases. We made our trained models and generated images open to other researchers.

 

[1] Kumar, A.; Zhou, Y. Human identification using finger images. IEEE Trans. Image Process. 2012, 21, 2228-2244.

[2] SDUMLA-HMT Finger Vein Database. Available online: http://mla.sdu.edu.cn/info/1006/1195.htm

 

(2) Request for Models and Images

To gain access to our models and generated images, please fill in the request form below and send it by email to Mr. Jiho Choi (choijh1027@dongguk.edu). Any work that uses our algorithm and models must acknowledge the authors by including the following reference.

 

Jiho Choi, Kyoung Jun Noh, Se Woon Cho, Se Hyun Nam, Muhammad Owais, and Kang Ryoung Park, “Modified Conditional Generative Adversarial Network-based Optical Blur Restoration for Finger-vein Recognition,” IEEE Access, Vol. 8, pp. 16281- 16301, January 2020.

 

 

< Request Form for Pretrained Models, Algorithm, and Images>

 

Please complete the following form to request access to our pretrained models, algorithms, and images. These should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

                 (Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

Date :

 

                                                             Name (signature)

 

 

 

49. Dongguk Super-resolution Reconstruction & Age Estimation CNN Model (DSR&AE-CNN)

 

(1) Introduction

We trained our models (DSR&AE-CNN) for facial image super-resolution reconstruction and age estimation using the PAL [1] and MORPH [2] databases. We made our trained models and generated images open to other researchers.

 

[1] PAL database. Available online: http://agingmind.utdallas.edu/download-stimuli/face-database/ (accessed on 17 May 2019).

[2] MORPH database. Available online: https://ebill.uncw.edu/C20231_ustores/web/store_main.jsp?STOREID=4 (accessed on 17 May 2019).

 

(2) Request for Models

To gain access to our models and images, please fill in the request form below and send it by email to Mr. Se Hyun Nam (nsh6473@dongguk.edu). Any work that uses our algorithm and models must acknowledge the authors by including the following reference.

 

Se Hyun Nam, Yu Hwan Kim, Noi Quang Truong, Jiho Choi, and Kang Ryoung Park, Age Estimation by Super-Resolution Reconstruction Based on Adversarial Networks, IEEE Access, Vol. 8, pp. 17103-17120, January 2020.

 

< Request Form for Pretrained Models and Algorithm>

 

Please complete the following form to request access to our pretrained models. These models should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

              (Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

Date :

 

               Name (signature)

 

 

48. Dongguk ESSN models and algorithm for Semantic Segmentation

 

(1) Introduction

We propose a new model (ESSN) for semantic segmentation. The proposed model was trained with the open datasets SBD [1] and CamVid [2]. We made our trained model and algorithm open to other researchers.

 

[1] S. Gould, R. Fulton, and D. Koller, Decomposing a Scene into Geometric and Semantically Consistent Regions, in Proc. IEEE Int. Conf. Comput. Vis., Kyoto, Japan, 29 Sep.-2 Oct. 2009, pp. 1-8.

[2] G. J. Brostow, J. Shotton, J. Fauqueur, and R. Cipolla, Segmentation and Recognition Using Structure from Motion Point Clouds, in Proc. European Conf. Comput. Vis., Marseille, France, 12-18 Oct. 2008, pp. 44-57.

 

(2) Request for Models

To obtain our pretrained model, please fill in the request form below and send an email to Mr. Dong Seop Kim (seob2@dongguk.edu). Any work that uses our algorithm and models must acknowledge the authors by including the following reference.

 

Dong Seop Kim, Muhammad Arsalan, Muhammad Owais, and Kang Ryoung Park, ESSN: Enhanced Semantic Segmentation Network by Residual Concatenation of Feature Maps, IEEE Access, Vol. 8, pp. 21363-21379, February 2020.

 

< Request Form for Pretrained Models and Algorithm>

 

Please complete the following form to request access to our pretrained models. These models should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

              (Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

Date :

 

               Name (signature)

 

 

 

47. Dongguk Mask R-CNN Model for Elimination of Thermal Reflections, Generated Data, Dongguk Thermal Image Database (DTh-DB), and Items and Vehicles Database (DI&V-DB)

 

(1) Introduction

We trained the Mask R-CNN model with our thermal image database to eliminate thermal reflections. We made the models and generated data, together with the Dongguk thermal image database (DTh-DB) and the Dongguk items & vehicles database (DI&V-DB), open to other researchers.

 

(2) Request for Models, Generated Data, and Databases

To obtain our pretrained model, generated data, and databases, please fill in the request form below and send an email to Prof. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Hyo Sik Yoon, Dat Tien Nguyen, Tuyen Danh Pham, and Kang Ryoung Park, A Study on the Elimination of Thermal Reflections, IEEE Access, Vol. 7, pp. 174597-174611, December 2019.

 

< Request Form for Models, Generated Data, and Databases >

 

Please complete the following form to request access to our pretrained model, generated data, and database (all fields must be completed). This model, data, and database should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

46. Dongguk Ultrasound Thyroid Nodule Classification (DUS-TNC) algorithm

 

(1) Introduction

In this study, we enhance the classification performance of an ultrasound image-based thyroid nodule classification system by cascading classifiers that use FFT-based and CNN-based methods. The pretrained model was successfully trained using the TDID dataset [1]. We made our trained model and algorithm open to other researchers.

 

[1] Pedraza, L.; Vargas, C.; Narvaez, F.; Duran, O.; Munoz, E.; Romero, E. An open access thyroid ultrasound-image database. In Proceedings of the 10th International Symposium on Medical Information Processing and Analysis, Colombia, 28 January 2015 (SPIE Proceedings, Vol. 9287, pp. 1-6).
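The cascade described above can be sketched roughly as follows. This is a minimal NumPy illustration under assumed details: the radially averaged spectrum, the toy stand-in classifiers, and the 0.8 confidence threshold are our assumptions, not the published method.

```python
import numpy as np

def fft_features(patch):
    # Frequency-domain descriptor: radially averaged FFT magnitude spectrum.
    mag = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    cy, cx = mag.shape[0] // 2, mag.shape[1] // 2
    y, x = np.indices(mag.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    sums = np.bincount(r.ravel(), weights=mag.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

def cascade_classify(patch, freq_clf, cnn_clf, threshold=0.8):
    # Stage 1: frequency-domain classifier; accept its verdict only when
    # it is confident. Ambiguous patches fall through to stage 2 (the CNN).
    p = freq_clf(fft_features(patch))
    if p >= threshold or p <= 1.0 - threshold:
        return p >= 0.5, "fft"
    return cnn_clf(patch) >= 0.5, "cnn"

# Toy stand-in classifiers returning a probability of malignancy.
patch = np.ones((32, 32))
confident = lambda feats: 0.95  # stage 1 is sure -> stage 2 is skipped
unsure = lambda feats: 0.55     # stage 1 is unsure -> defer to the CNN
cnn = lambda img: 0.2
```

The point of the cascade is that the cheap frequency-domain stage handles the easy cases, so the heavier CNN only runs on ambiguous nodules.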

 

(2) Request for our algorithm

To gain access to our algorithm (code and pretrained models), please sign and scan the request form and email it to Prof. D. T. Nguyen at nguyentiendat@dongguk.edu. Any work that uses our algorithm must acknowledge the authors by including the following reference.

 

Dat Tien Nguyen, Tuyen Danh Pham, Ganbayar Batchuluun, Hyo Sik Yoon, and Kang Ryoung Park, Artificial Intelligence-based Thyroid Nodule Classification Using Information from Spatial and Frequency Domains, Journal of Clinical Medicine, Vol. 8, Issue 11(1976), pp. 1-24, November 2019.

 

 

< Request Form for DUS-TNC algorithm >

 

Please complete the following form to request access to our algorithm (all fields must be completed). These models should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

          (Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

                                                         Date:

                                                         Name (signature)

 

 

45. Dongguk Modified CycleGAN for Age Estimation (DMC4AE) and Generated Images

 

(1) Introduction

We trained our modified CycleGAN models for age estimation with the heterogeneous MegaAge and MORPH databases [1,2]. We made our trained models and the images generated by the modified CycleGAN open to other researchers.

 

1.    Y. Zhang, L. Liu, C. Li, and C.C. Loy, Quantifying facial age by posterior of age comparisons, In Proceedings of British Machine Vision Conference, London, UK, 4-7 September 2017; pp. 1-14.

2.    K. Ricanek and T. Tesafaye, Morph: A longitudinal image database of normal adult age-progression, In Proceedings of 7th International Conference on Automatic Face and Gesture Recognition, Southampton, UK, 10-12 April 2006; pp. 341-345.

 

(2) Request for our models and images

To gain access to our models and images, please download the request form below, then sign, scan, and email it to Mr. Yu Hwan Kim (taekkuon@dongguk.edu).

 

Any work that uses these models and images must acknowledge the authors by including the following reference.

 

Yu Hwan Kim, Min Beom Lee, Se Hyun Nam, and Kang Ryoung Park, Enhancing the Accuracies of Age Estimation with Heterogeneous Databases Using Modified CycleGAN, IEEE Access, Vol. 7, pp. 163461-163477, November 2019.

 

< Request Form for DMC4AE and Generated Images >

 

Please complete the following form to request access to these models with images (all fields must be completed). These models should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

44. Dongguk Vess-Net Models with Algorithm

 

(1) Introduction

We trained our Vess-Net model, based on a dual-stream feature empowerment scheme, for retinal vessel segmentation to aid in diagnosing diseases such as diabetic and hypertensive retinopathy. In our experiments, we validated the performance of our method with three publicly available fundus image databases: DRIVE [1], CHASE-DB1 [2], and STARE [3]. We made our trained models open to other researchers.

 

1.    Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501-509.

2.    Fraz, M.M.; Remagnino, P.; Hoppe, A.; Uyyanonvara, B.; Rudnicka, A.R.; Owen, C.G.; Barman, S.A. An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Trans. Biomed. Eng. 2012, 59, 2538-2548.

3.    Hoover, A.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203-210.

 

(2) Request for our Vess-Net models

To gain access to the trained Vess-Net models, please download the request form below, then sign, scan, and email it to Mr. Muhammad Arsalan (arsal@dongguk.edu).

 

Any work that uses these Vess-Net models must acknowledge the authors by including the following reference.

 

Muhammad Arsalan, Muhammad Owais, Tahir Mahmood, Se Woon Cho, and Kang Ryoung Park, Aiding the Diagnosis of Diabetic and Hypertensive Retinopathy Using Artificial Intelligence-based Semantic Segmentation, Journal of Clinical Medicine, Vol. 8, Issue 9(1446), pp. 1-27, September 2019.

 

< Request Form for Vess-Net Models >

 

Please complete the following form to request access to these models (all fields must be completed). These models should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

 

43. Dongguk CNN for Detecting Road Markings Based on Adaptive ROI with Algorithms

(1) Introduction

We create adaptive region-of-interest (ROI) images before using them to train our convolutional neural network (CNN). In the first stage, a vanishing point is detected in order to create the ROI image. In the second stage, the ROI image, which covers the majority of the road region, is used as the input to train the CNN-based detector and classifier. We made the models, generated data, and labeled information of the database open to other researchers. Our CNN model was trained with the Malaga urban dataset [1], the Daimler dataset [2], and the Cambridge dataset [3].

 

1. The Málaga Stereo and Laser Urban Data Set - MRPT. Available online: https://www.mrpt.org/MalagaUrbanDataset (accessed on 1 October 2018).

2. Daimler Urban Segmentation Dataset. Available online: http://www.6d-vision.com/scene-labeling (accessed on 2 January 2019).

3. Cambridge-driving Labeled Video Database (CamVid). Available online: http://mi.eng.cam.ac.uk/research/projects/VideoRec/CamVid/ (accessed on 1 October 2018).
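The first-stage cropping can be illustrated with a minimal sketch. The margin value and the assumption that the ROI simply spans all image rows below the vanishing point are ours for illustration, not the exact published procedure.

```python
import numpy as np

def adaptive_roi(image, vanishing_point, margin=10):
    # Keep only the region below the vanishing point (plus a small margin
    # of rows above it), which covers the majority of the road surface.
    _, vy = vanishing_point
    top = max(int(vy) - margin, 0)
    return image[top:, :]

# Hypothetical 480x640 frame with a detected vanishing point at (x=320, y=200):
frame = np.zeros((480, 640), dtype=np.uint8)
roi = adaptive_roi(frame, vanishing_point=(320, 200))
```

The resulting crop, rather than the full frame, is what the second-stage CNN detector and classifier consume, so distant sky and background regions never reach the network.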

 

(2) Request for models, generated data, and labeled information

To obtain our pretrained model, generated data, and labeled information, please fill in the request form below and send an email to Dr. Toan Minh Hoang at hoangminhtoan@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Toan Minh Hoang, Se Hyun Nam, and Kang Ryoung Park, Enhanced Detection and Recognition of Road Markings Based on Adaptive Region of Interest and Deep Learning, IEEE Access, Vol. 7, pp. 109817-109832, August 2019.

 

 

 

< Request Form for Models, Generated Data, and Labeled Information >

 

Please complete the following form to request access to our pretrained model, generated data, and labeled information of the database (all fields must be completed). This model, data, and database should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

42. Dongguk CNN stacked LSTM and CycleGAN for Action Recognition, Generated Data, and Dongguk Activities & Actions Database (DA&A-DB2)

 

(1) Introduction

We trained our convolutional neural network (CNN), CNN stacked with long short-term memory (CNN-LSTM), and cycle-consistent adversarial network (CycleGAN) models with our action database. We made the models, generated data, and database open to other researchers.

 

(2) Request for Models, Generated Data, and DA&A-DB2

To obtain our pretrained model, generated data, and database, please fill in the request form below and send an email to Dr. Batchuluun at ganabata87@dongguk.edu. Any work that uses the provided pretrained network must acknowledge the authors by including the following reference.

 

Ganbayar Batchuluun, Dat Tien Nguyen, Tuyen Danh Pham, Chanhum Park, and Kang Ryoung Park, Action Recognition from Thermal Videos, IEEE Access, Vol. 7, pp. 103893-103917, August 2019.

 

 

 

< Request Form for Models, Generated Data, and DA&A-DB2 >

 

Please complete the following form to request access to our pretrained model, generated data, and database (all fields must be completed). This model, data, and database should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

41. Label Information of Sun Yat-sen University Multiple Modality Re-ID (SYSU-MM01) Database and Dongguk Gender Recognition CNN Models (DGR-CNN)

 

(1) Introduction

We collected gender information for the Sun Yat-sen University Multiple Modality Re-ID (SYSU-MM01) database and trained a gender recognition system based on ResNet-101 using two databases: the SYSU-MM01 and the Dongguk Body-based Gender Database (DBGender-DB2). We made the label information of the SYSU-MM01 database and the Dongguk Gender Recognition CNN (DGR-CNN) open to other researchers.

 

(2) Request for Label Information and DGR-CNN

To gain access to the label information and DGR-CNN, please download the request form below, then sign, scan, and email it to Ms. Na Rae Baek (naris27@dongguk.edu).

 

Any work that uses the label information of SYSU-MM01 database or this CNN model must acknowledge the authors by including the following reference.

Na Rae Baek, Se Woon Cho, Ja Hyung Koo, Noi Quang Truong, and Kang Ryoung Park, Multimodal Camera-based Gender Recognition Using Human-body Image with Two-step Reconstruction Network, IEEE Access, Vol. 7, pp. 104025-104044, August 2019.

 

 

< Request Form for label information of SYSU-MM01 and DGR-CNN >

 

Please complete the following form to request access to the label information of SYSU-MM01 and DGR-CNN. These files should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

40. Dongguk cGAN-based Iris Image Generation Model and Generated Images (DGIM&GI)

 

(1) Introduction

We trained generation models based on a cGAN (the pix2pix model) using the NICE.II training dataset (selected from UBIRIS.v2) and the MICHE database, captured in a visible-light environment, and the CASIA-Iris-Distance database, captured in an NIR environment. Additionally, we generated iris images using the trained generation models for each database. We made the DGIM (trained generation models) and GI (images generated by the trained models) open to other researchers.

 

(2) Request for DGIM&GI

To gain access to the DGIM&GI, please download the request form below, then sign, scan, and email it to Mr. Min Beom Lee (smin6180@naver.com).

 

Any work that uses this DGIM&GI must acknowledge the authors by including the following reference.

 

Min Beom Lee, Yu Hwan Kim, and Kang Ryoung Park, Conditional Generative Adversarial Network-Based Data Augmentation for Enhancement of Iris Recognition Accuracy, IEEE Access, Vol. 7, pp. 122134-122152, September 2019.

 

< Request Form for DGIM&GI >

 

Please complete the following form to request access to the DGIM&GI. These files should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

39. Dongguk CNN and LSTM models for the classification of multiple gastrointestinal (GI) diseases, and video indices of experimental endoscopic videos

 

(1) Introduction

We trained a cascaded ResNet18 and LSTM model for the classification of multiple gastrointestinal diseases using endoscopic video data. Two publicly available endoscopic databases [1,2] were used for the training and validation of our proposed CNN+LSTM-based model. Moreover, the trained model was also used for class-prediction-based retrieval of endoscopic images. We made our trained model and the video indices of the experimental endoscopic videos open to other researchers.

 

1. Gastrolab - the gastrointestinal site. Available online: http://www.gastrolab.net/ni.htm (accessed on 1 February 2019).

2. Pogorelov, K.; Randel, K. R.; Griwodz, C.; Eskeland, S. L.; de Lange, T.; Johansen, D.; Spampinato, C.; Dang-Nguyen, D.-T.; Lux, M.; Schmidt, P. T.; Riegler, M.; Halvorsen, P. KVASIR: A multi-class image dataset for computer aided gastrointestinal disease detection. In Proceedings of the 8th ACM Multimedia Systems Conference, Taipei, Taiwan, 20-23 June 2017; pp. 164-169.

 

(2) Request for our CNN+LSTM models and video indices

To gain access to the models and video indices, please download the request form below, then sign, scan, and email it to Mr. Muhammad Owais (malikowais266@gmail.com).

 

Any work that uses these CNN+LSTM models and video indices must acknowledge the authors by including the following reference.

 

Muhammad Owais, Muhammad Arsalan, Jiho Choi, Tahir Mahmood, and Kang Ryoung Park, Artificial Intelligence-Based Classification of Multiple Gastrointestinal Diseases Using Endoscopy Videos for Clinical Diagnosis, Journal of Clinical Medicine, Vol. 8, Issue 7(986), pp. 1-33, July 2019.

 

< Request Form for CNN+LSTM Models and Video Indices >

 

Please complete the following form to request access to these models and video indices (all fields must be completed). These models and video indices should not be used for commercial purposes.

 

Name:

 

Contact:  (Email)

(Telephone)

 

Organization Name:

 

Organization Address:

 

Purpose:

 

 

Date:

 

                Name (signature)

 

 

38. Dongguk Dual-Camera-based Gaze Database (DDCG-DB1) and CNN Models with Algorithms

 

(1) Introduction

A natural gaze-detection database, the Dongguk dual-camera-based gaze database (DDCG-DB1), was constructed from images of 26 drivers captured by dual near-infrared (NIR) cameras with illuminators in a vehicle environment, and classified into nine situations, such as wearing sunglasses, different glasses, or hats, and using mobile phones. We make DDCG-DB1 and our CNN model trained with this database open to other researchers.

 

(2) Request for DDCG-DB1 and CNN model

To gain access to DDCG-DB1 and the CNN model, please download the request form below, then sign, scan, and email it to Mr. Hyo Sik Yoon (yoonhs@dongguk.edu).

Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.

 

Hyo Sik Yoon, Na Rae Baek, Noi Quang Truong, and Kang Ryoung Park, Driver Gaze Detection Based on Deep Residual Networks Using the Combined Single Image of Dual Near-Infrared Cameras, IEEE Access, Vol. 7, pp. 93448-93461, July 2019.

 


 

< Request Form for DDCG-DB1 and CNN models >

 

Please complete the following form to request access to the DDCG-DB1 and CNN models (all fields must be completed). This dataset should not be used for commercial purposes.

 

Name :

 

Contact : (Email)

(Telephone)

 

Organization Name :

 

Organization Address :

 

Purpose :

 

 

Date :

 

               Name (signature)

 

 

 

37. Dongguk Mobile Finger-Wrinkle Database (DMFW-DB1) and CNN model with Algorithms

 

(1) Introduction

We collected the smartphone-acquired finger-wrinkle open database DMFW-DB1 using the LG V20's frontal-viewing camera (8 megapixels (2,160 × 3,840 pixels), 30 fps, auto mode) from 33 people (both hands) in five different indoor environments. In addition, we trained a finger-wrinkle recognition system based on ResNet-101. We make DMFW-DB1 and our CNN model trained with this database open to other researchers.

 

(2) Request for DMFW-DB1 and CNN model

To gain access to DMFW-DB1 and the CNN model, please download the request form below, then sign, scan, and email it to Mr. Chan Sik Kim (kimchsi90@dongguk.edu).

Any work that uses or incorporates the dataset must acknowledge the authors by including the following reference.