{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "MedTech01.ipynb",
"provenance": [],
"collapsed_sections": [
"SWEwgJrHUtEl",
"yHOIHPEVJ2Q-"
],
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/sohumsen/5772d4ab21c1c39ac2eda47c2ba03f35/medtech01.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XPN3cpZZbV-I"
},
"source": [
"# Deep Learning in Medical Imaging: A series by MedTech UCL\n",
"*by Liam Chalcroft*"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "HrIzjho29-lD"
},
"source": [
"**This article is part of a series highlighting the uses of deep learning in radiology and medical imaging. [To view the full series click here](https://uclmed.tech/medtech-portal/)**"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "SWEwgJrHUtEl"
},
"source": [
"## Introduction"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BtxX58zM-8Ew"
},
"source": [
"The broad field of machine learning has gained remarkable traction in medicine in recent years - searching the term \"machine learning\" on PubMed returns over 15,000 hits for 2020, compared to just over 700 in 2010. This is even more pronounced for the sub-field of deep learning, with hits rising from 600 in 2016 to 9,000 in 2020. "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mXnANgzBInTK"
},
"source": [
"This article series will assume some basic knowledge of terms such as 'deep learning' and 'neural network' - for an introduction to these concepts, there are many great resources, such as [this YouTube series](https://www.youtube.com/watch?v=aircAruvnKk).\n",
"\n",
"In the next few articles we will work mostly with convolutional neural networks (CNNs), and we will cover some background on how these work."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "a9j5ea3AInz-"
},
"source": [
"Deep learning has found use in a number of different medical applications, ranging from [drug discovery](https://doi.org/10.1016/j.drudis.2018.01.039) to [analysis of electronic health records](https://www.nature.com/articles/s41746-018-0029-1/). Among these, computer vision technologies - which can learn representations and patterns from visual data - have shown huge potential for medical image analysis. This has been demonstrated across a whole range of specialties and image types, as discussed in a [recent review paper](https://www.nature.com/articles/s41746-020-00376-2); in this series, however, we will look in particular at applications in radiology."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "y0xkozYtJfir"
},
"source": [
"The field of radiology itself has revolutionised medicine since its inception, with the widespread availability of technology such as Ultrasound, MRI and CT now making it possible to see inside the body non-invasively to aid in clinical decisions. Deep learning has been used to enhance this technology in a number of ways, ranging from improving aspects of the scanner itself (such as higher quality or faster acquisition), to automating steps of the time-consuming analysis process or even producing automatic diagnoses and prognoses. This is a rapidly growing field and there have been a number of developments that have shown the capability of these deep learning methods to match and even outperform clinicians ([here](https://www.nature.com/articles/s41591-018-0107-6) and [here](https://doi.org/10.1117/1.JMI.7.5.055501))!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oLUo8moHQ35m"
},
"source": [
"In this series we will explore a number of these different uses of deep learning, and for each of these we will try it out in an interactive tutorial. Links to the following articles will be available here:\n",
"* Classification - Deep Learning for prediction of COVID-19 risk in clinical scans\n",
"* Registration - Deep Learning for alignment of intra- and inter-subject brain scans\n",
"* Segmentation - Deep Learning for labelling of tumours in brain scans\n",
"* Super-resolution - Creating high resolution brain images from low resolution clinical scans\n",
"* Image reconstruction - Removing artifacts from sparsely sampled MRI images\n",
"* Knowing what we know - Interpretability and quantifying uncertainty "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "u54EBS2LwrwK"
},
"source": [
"All articles will provide step-by-step examples that can be reproduced on any system with Python installed. The simplest way to get started is through the Google Colab links, which run the code remotely so you don't have to install anything.\n",
"\n",
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1xRSkjqHzymzjSG6-uXuZkqtCiNGwqYbi?usp=sharing \"MedTech Deep Learning #1\")"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "tIkK6ZAkKF04"
},
"source": [
"For this first article we will begin by understanding the most important component of any analysis - the data!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "GcfC_X-sUwBv"
},
"source": [
"## Medical Imaging Data"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AgqdvMa4SPCu"
},
"source": [
"Medical images can be either 2-D images (e.g. an X-ray) or 3-D volumes (e.g. MRI, CT). In both cases, 2-D images are reconstructed within the scanner from the raw signal and saved as DICOM (Digital Imaging and Communications in Medicine) files; a 3-D volume is then formed by stacking these 2-D slices through the object. For this series we will be working with 3-D images in the NIfTI (Neuroimaging Informatics Technology Initiative) format."
]
},
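{
"cell_type": "markdown",
"metadata": {
"id": "stackSketch01"
},
"source": [
"As a quick sketch of this stacking idea, we can build a toy 3-D volume from 2-D *numpy* arrays (these are placeholder zero arrays, not real scan data):"
]
},
{
"cell_type": "code",
"metadata": {
"id": "stackSketch02"
},
"source": [
"import numpy as np\n",
"\n",
"# sixty placeholder 2-D slices, each 512 x 512 pixels\n",
"slices = [np.zeros((512, 512)) for _ in range(60)]\n",
"\n",
"# stacking along a new third axis gives a 3-D volume\n",
"volume = np.stack(slices, axis=-1)\n",
"print(volume.shape)  # (512, 512, 60)"
],
"execution_count": null,
"outputs": []
},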
{
"cell_type": "markdown",
"metadata": {
"id": "xZwl08SdDdIS"
},
"source": [
"We'll begin by loading some data from the [TCIA (The Cancer Imaging Archive)](https://www.cancerimagingarchive.net/) database. The following few cells just download the files and convert them from DICOM to NIfTI, so feel free to run them and skip ahead."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "yHOIHPEVJ2Q-"
},
"source": [
"### File loading"
]
},
{
"cell_type": "code",
"metadata": {
"id": "wQzuDnd4Dpy1"
},
"source": [
"! mkdir MedTech01"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "LhQxA028Ent5"
},
"source": [
"! pip -q install dicom2nifti"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "5DQLeurQD_LR"
},
"source": [
"import requests\n",
"import dicom2nifti\n",
"import os"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "9lm1d0JGgTk4"
},
"source": [
"Here we set up a simple function to download files from the database's REST API:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "QcST-kP4EEGh"
},
"source": [
"def download_url(url, save_path, chunk_size=128):\n",
"    r = requests.get(url, stream=True)\n",
"    if r.status_code == 200:\n",
"        print('Request successful, code', r.status_code)\n",
"        with open(save_path, 'wb') as fd:\n",
"            for chunk in r.iter_content(chunk_size=chunk_size):\n",
"                fd.write(chunk)\n",
"    else:\n",
"        print('Request unsuccessful, code', r.status_code)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "oyE7al_ZSEjl"
},
"source": [
"Images can be downloaded by selecting their specific Series UID. Feel free to [browse the archive here](https://nbia.cancerimagingarchive.net/nbia-search/) and try it out for yourself."
]
},
{
"cell_type": "code",
"metadata": {
"id": "_Y8dk-xUEIW0"
},
"source": [
"ct_brain_url = 'https://services.cancerimagingarchive.net/services/v4/TCIA/query/getImage?SeriesInstanceUID=1.3.6.1.4.1.14519.5.2.1.7009.2402.882136884134365981035682566340'\n",
"mri_brain_url = 'https://services.cancerimagingarchive.net/services/v4/TCIA/query/getImage?SeriesInstanceUID=1.3.6.1.4.1.14519.5.2.1.7009.2402.327122726537459238654047774771'\n",
"ct_lung_url = 'https://services.cancerimagingarchive.net/services/v4/TCIA/query/getImage?SeriesInstanceUID=1.2.840.113704.1.111.9304.1171412289.27'\n",
"pet_wb_url = 'https://services.cancerimagingarchive.net/services/v4/TCIA/query/getImage?SeriesInstanceUID=1.3.6.1.4.1.14519.5.2.1.3320.3273.148598530571851121885405469098'\n",
"\n",
"ct_brain_zip = os.path.join('./MedTech01/ct_brain_dicom.zip')\n",
"mri_brain_zip = os.path.join('./MedTech01/mri_brain_dicom.zip')\n",
"ct_lung_zip = os.path.join('./MedTech01/ct_lung_dicom.zip')\n",
"pet_wb_zip = os.path.join('./MedTech01/pet_wb_dicom.zip')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "6OXaELhGEWfv"
},
"source": [
"download_url(url=ct_brain_url,save_path=ct_brain_zip)\n",
"download_url(url=mri_brain_url,save_path=mri_brain_zip)\n",
"download_url(url=ct_lung_url,save_path=ct_lung_zip)\n",
"download_url(url=pet_wb_url,save_path=pet_wb_zip)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "nzXtFdUFEYVK"
},
"source": [
"! unzip -q ./MedTech01/ct_brain_dicom.zip -d ./MedTech01/ct_brain_dicom/\n",
"! unzip -q ./MedTech01/mri_brain_dicom.zip -d ./MedTech01/mri_brain_dicom/\n",
"! unzip -q ./MedTech01/ct_lung_dicom.zip -d ./MedTech01/ct_lung_dicom/\n",
"! unzip -q ./MedTech01/pet_wb_dicom.zip -d ./MedTech01/pet_wb_dicom/"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "L3OTjA31gfA7"
},
"source": [
"We must now stack the 2-D DICOM slices together to create 3-D volumes. To do this we use the *dicom2nifti* library:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "wp-GinFCEh8M"
},
"source": [
"dicom2nifti.dicom_series_to_nifti('./MedTech01/ct_brain_dicom/', './MedTech01/ct_brain.nii', reorient_nifti=True)\n",
"dicom2nifti.dicom_series_to_nifti('./MedTech01/mri_brain_dicom/', './MedTech01/mri_brain.nii', reorient_nifti=True)\n",
"dicom2nifti.dicom_series_to_nifti('./MedTech01/ct_lung_dicom/', './MedTech01/ct_lung.nii', reorient_nifti=True)\n",
"dicom2nifti.dicom_series_to_nifti('./MedTech01/pet_wb_dicom/', './MedTech01/pet_wb.nii', reorient_nifti=True)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "ph2W7Xs9J-GB"
},
"source": [
"### File analysis"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "C0kHpumPUmf-"
},
"source": [
"We now have a folder of 3-D scans: a brain CT, a brain MRI, a lung CT and a whole-body PET. Let's take a look!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "OX4oksIfxDiF"
},
"source": [
"As we are working in Python, we will need to install/import libraries that are built to help us in certain tasks. We will be getting to know several of these over the series, but for now we will stick to the few that we need for visualising our scans."
]
},
{
"cell_type": "code",
"metadata": {
"id": "vs-eX10v0ws2"
},
"source": [
"import nibabel as nib\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "UmifRIqVUmY3"
},
"source": [
"*nibabel* is a package designed for reading and writing medical images, and can handle a whole host of different file formats - including our NIfTI files. *matplotlib* is a popular Python library for plotting, and we will use it for visualising our scans. We will be converting our files to *numpy* arrays, so the *numpy* library is required for manipulating these arrays."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "lpLXNLYNUqd4"
},
"source": [
"Let's load our four scans using *nibabel*'s *load* function:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "gmYifzWzUhPo"
},
"source": [
"img_ct_brain = nib.load('./MedTech01/ct_brain.nii')\n",
"img_mri_brain = nib.load('./MedTech01/mri_brain.nii')\n",
"img_ct_lung = nib.load('./MedTech01/ct_lung.nii')\n",
"img_pet_wb = nib.load('./MedTech01/pet_wb.nii')"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "gprh1MBNVNDr"
},
"source": [
"In addition to the image slices, NIfTI files contain extra information about the scan, such as spacing and orientation, in a header. Let's take a look at this for our brain CT:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "ZFvV4ldc1SVt"
},
"source": [
"print('Brain CT info: \\n', img_ct_brain.header)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "WNUVsrMZVx31"
},
"source": [
"From the 'pixdim' row above we can see that the scan was acquired with roughly 0.5mm in-plane resolution (i.e. along x and y), and 2.5mm thickness between slices.\n",
"\n",
"The rest of this information isn't too important for now, but will be useful when we look later at how to align two scans - a process known as registration."
]
},
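{
"cell_type": "markdown",
"metadata": {
"id": "zoomsSketch01"
},
"source": [
"We can also read the voxel spacing programmatically rather than eyeballing the header printout - *nibabel* exposes it via the header's *get_zooms* method (a quick sketch, to be run after loading the scans above):"
]
},
{
"cell_type": "code",
"metadata": {
"id": "zoomsSketch02"
},
"source": [
"# voxel spacing in mm along each axis (x, y, z)\n",
"print('Brain CT voxel spacing (mm):', img_ct_brain.header.get_zooms())"
],
"execution_count": null,
"outputs": []
},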
{
"cell_type": "markdown",
"metadata": {
"id": "BL3EoUHxWOcW"
},
"source": [
"Now let's take a look at our images! To do this we need to load each image into a *numpy* array using the *get_fdata* function:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "sh1fDG0DWRg-"
},
"source": [
"vol_ct_brain = img_ct_brain.get_fdata()\n",
"vol_mri_brain = img_mri_brain.get_fdata()\n",
"vol_ct_lung = img_ct_lung.get_fdata()\n",
"vol_pet_wb = img_pet_wb.get_fdata()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "C5Or5Cydjh-i"
},
"source": [
"print(' CT Brain scan size: ', vol_ct_brain.shape,\n",
"      '\\n MRI Brain scan size: ', vol_mri_brain.shape,\n",
"      '\\n CT Lung scan size: ', vol_ct_lung.shape,\n",
"      '\\n PET whole-body scan size: ', vol_pet_wb.shape)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "TUHC2MlTqTGz"
},
"source": [
"We can see that our images are indeed different sizes. For visualisation we can simply navigate through slices manually, but for many analyses it is important to align images as well as possible via image registration."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "3RmImx2Yc79r"
},
"source": [
"Let's start by checking a single slice of each of the images. We can plot these side-by-side using the *subplot* feature of *matplotlib*:"
]
},
{
"cell_type": "code",
"metadata": {
"id": "FsSyuYKrgcog"
},
"source": [
"plt.figure(figsize=(14,14))\n",
"plt.subplot(2,3,1)\n",
"plt.imshow(vol_ct_brain[:,:,40], cmap='gray')\n",
"plt.title('Brain CT')\n",
"plt.subplot(2,3,2)\n",
"plt.imshow(vol_mri_brain[:,:,20], cmap='gray')\n",
"plt.title('Brain MRI')\n",
"plt.subplot(2,3,3)\n",
"plt.imshow(vol_ct_lung[:,:,200], cmap='gray')\n",
"plt.title('Lung CT')\n",
"plt.subplot(2,1,2)\n",
"plt.imshow(vol_pet_wb[:,100,:], cmap='jet')\n",
"plt.title('Whole-body PET')\n",
"plt.show()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "usNp40EiuVCr"
},
"source": [
"We can see that the brain CT in particular appears quite dull - the display range is stretched by extreme values such as air. We can improve this by clipping the intensities to a narrower window, ignoring the background. Let's try a range of (0, 150):"
]
},
{
"cell_type": "code",
"metadata": {
"id": "ZZX6tvoBv5pf"
},
"source": [
"vol_ct_brain_clipped = np.clip(vol_ct_brain, 0, 150)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "te9cDGTXN6CH"
},
"source": [
"plt.figure(figsize=(14,14))\n",
"plt.subplot(2,2,1)\n",
"plt.imshow(vol_ct_brain[:,:,30], cmap='gray')\n",
"plt.title('Brain CT (original)')\n",
"plt.subplot(2,2,2)\n",
"plt.imshow(vol_ct_brain_clipped[:,:,30], cmap='gray')\n",
"plt.title('Brain CT (clipped)')\n",
"plt.show()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "L2E3uAIxOWcb"
},
"source": [
"We can use a handy feature of Jupyter notebooks called widgets to visualise these scans interactively. Using Google Colab means we're limited in which widgets we can use, but there are plenty out there, such as *itkwidgets*, that can give even more control."
]
},
{
"cell_type": "code",
"metadata": {
"id": "GuOenkcHjavH",
"cellView": "form"
},
"source": [
"#@title Slice Viewer { run: \"auto\", vertical-output: true }\n",
"#@markdown Double-click here to view the code used. Slide the bar to navigate through slices\n",
"CT_Brain = 11 #@param {type:\"slider\", min:1, max:63, step:1}\n",
"MRI_Brain = 2 #@param {type:\"slider\", min:1, max:32, step:1}\n",
"CT_Lung = 174 #@param {type:\"slider\", min:1, max:341, step:1}\n",
"PET_Wholebody = 105 #@param {type:\"slider\", min:1, max:200, step:1}\n",
"\n",
"plt.figure(figsize=(12,12))\n",
"plt.subplot(2,3,1)\n",
"plt.imshow(vol_ct_brain_clipped[:,:,CT_Brain-1], cmap='gray')\n",
"plt.title('Brain CT')\n",
"plt.subplot(2,3,2)\n",
"plt.imshow(vol_mri_brain[:,:,MRI_Brain-1], cmap='gray')\n",
"plt.title('Brain MRI')\n",
"plt.subplot(2,3,3)\n",
"plt.imshow(vol_ct_lung[:,:,CT_Lung-1], cmap='gray')\n",
"plt.title('Lung CT')\n",
"plt.subplot(2,1,2)\n",
"plt.imshow(vol_pet_wb[:,PET_Wholebody-1,:], cmap='jet')\n",
"plt.title('Whole-body PET')\n",
"plt.show()"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "PQlFiBE-YQV0"
},
"source": [
"## Conclusions"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "48nYVlwdUmWF"
},
"source": [
"There we have it - we have successfully accessed data from a public archive, processed the raw files into 3-D volumes and created interactive visualisations of our data. Understanding the format of our data will help us decide how to pre-process it for the different tasks we take on in this series.\n",
"\n",
"In the next article we will look at using convolutional neural networks (CNNs) to predict COVID-19 risk in chest X-ray scans with [COVID-Net](https://www.nature.com/articles/s41598-020-76550-z).\n",
"\n",
"Thank you for reading and see you next time!"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "jxuZYsp7DgrF"
},
"source": [
"### References:\n",
"\n",
"Available as links in the text."
]
}
]
}