@thomasaarholt
Last active November 16, 2017 12:35
Log output after cuprismatic crashes when running only on the GPU
[I 13:34:12.255 NotebookApp] Adapting to protocol v5.1 for kernel 879c9343-88fe-43f2-a9d7-9416223907b7
COMPILED FOR GPU
Simulation parameters:
=====================
Algorithm: PRISM
interpolationFactorX = 4
interpolationFactorY = 4
filenameAtoms = SI100.XYZ
filenameOutput = tmp.mrc
realspacePixelSize[0] = 0.1
realspacePixelSize[1] = 0.1
potBound = 1
numFP = 5
sliceThickness = 2
E0 = 300000
alphaBeamMax = 0.024
numThreads = 0
batchSizeTargetCPU = 1
batchSizeTargetGPU = 2
probeStepX = 0.25
probeStepY = 0.25
cellDim[0] = 20
cellDim[1] = 20
cellDim[2] = 20
tileX = 3
tileY = 3
tileZ = 1
probeDefocus = 0
C3 = 0
C5 = 0
probeSemiangle = 0.021
detectorAngleStep = 0.001
probeXtilt = 0
probeYtilt = 0
scanWindowXMin = 0.4
scanWindowXMax = 0.6
scanWindowYMin = 0.4
scanWindowYMax = 0.6
integrationAngleMin = 0
integrationAngleMax = 0.001
randomSeed = 0
includeOccupancy = true
includeThermalEffects = true
alsoDoCPUWork = true
save2DOutput = false
save3DOutput = true
save4DOutput = false
numGPUs = 4
numStreamsPerGPU = 3
alsoDoCPUWork = 1
earlyCPUStopCount = 100
Data Transfer Mode : Auto
Formatting
Execution plan: PRISM
Estimated potential array size = 29573120
Estimated buffer memory needed = 53231616
meta.numStreamsPerGPU*2*batch_size*imageSize[0]*imageSize[1]= 4435968
Available GPU memory = 3579219148
Estimated GPU memory usage for single transfer method = 802170880
Using GPU codes
Using single transfer method
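The single-transfer choice above follows from the two memory figures in the log: the estimated usage for the single-transfer method (~802 MB) fits within the ~3.6 GB of available GPU memory. A minimal sketch of that decision, using the exact values from the log (the threshold rule itself is an assumption inferred from these lines, not Prismatic's actual source):

```python
# Memory figures reported in the log, in bytes.
available_gpu_memory = 3579219148      # "Available GPU memory"
single_transfer_estimate = 802170880   # "Estimated GPU memory usage for single transfer method"

def choose_transfer_mode(estimate, available):
    """Hypothetical decision rule: use single-transfer when the whole
    working set fits in GPU memory, otherwise stream it in pieces."""
    return "single" if estimate <= available else "streaming"

mode = choose_transfer_mode(single_transfer_estimate, available_gpu_memory)
print(mode)  # -> single, matching "Using single transfer method"
```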
extracted 8 atoms from 10 lines in SI100.XYZ
tiledCellDim[0]= 5.43
f_x = 16
f_y = 16
tiledCellDim[1] = 16.29
tiledCellDim[2] = 16.29
(f_y * round((tiledCellDim[1]) / meta.realspacePixelSize[0] / f_y) = 160
_imageSize[0] = 160
_imageSize[1] = 160
prism_pars.pixelSize[1] = 0.101812
prism_pars.pixelSize[0] = 0.101812
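The tiled cell and image dimensions above can be reproduced arithmetically: the 5.43 Å Si unit cell tiled 3× gives 16.29 Å, and the image size is rounded so it is divisible by f = 4 × interpolationFactor = 16, which is why the effective pixel size (0.101812 Å) drifts slightly from the requested 0.1 Å. A sketch of that arithmetic (variable names are illustrative, not Prismatic's internals):

```python
interpolation_factor = 4
f = 4 * interpolation_factor          # f_x = f_y = 16 in the log

cell_dim = 5.43                        # Å, Si unit cell from SI100.XYZ
tiles = 3                              # tileX = tileY = 3
tiled_cell_dim = cell_dim * tiles      # 16.29 Å, "tiledCellDim[1]"

pixel_size_requested = 0.1             # "realspacePixelSize"

# Image size rounded to the nearest multiple of f so the PRISM
# interpolation scheme divides the array evenly.
image_size = f * round(tiled_cell_dim / pixel_size_requested / f)

# The effective pixel size then differs slightly from the request.
pixel_size_actual = tiled_cell_dim / image_size

print(image_size)         # -> 160, matching "_imageSize[0] = 160"
print(pixel_size_actual)  # ~0.101812, matching "prism_pars.pixelSize[0]"
```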
Warning: User requested 4 GPUs but only 1 were found. Proceeding with 1.
deviceProperties.major = 5
deviceProperties.maxThreadsPerBlock = 1024
targetNumBlocks = 64
Simulation parameters:
=====================
Algorithm: PRISM
interpolationFactorX = 4
interpolationFactorY = 4
filenameAtoms = SI100.XYZ
filenameOutput = tmp.mrc
realspacePixelSize[0] = 0.1
realspacePixelSize[1] = 0.1
potBound = 1
numFP = 5
sliceThickness = 2
E0 = 300000
alphaBeamMax = 0.024
numThreads = 0
batchSizeTargetCPU = 1
batchSizeTargetGPU = 2
probeStepX = 0.25
probeStepY = 0.25
cellDim[0] = 5.43
cellDim[1] = 5.43
cellDim[2] = 5.43
tileX = 3
tileY = 3
tileZ = 1
probeDefocus = 0
C3 = 0
C5 = 0
probeSemiangle = 0.021
detectorAngleStep = 0.001
probeXtilt = 0
probeYtilt = 0
scanWindowXMin = 0.4
scanWindowXMax = 0.6
scanWindowYMin = 0.4
scanWindowYMax = 0.6
integrationAngleMin = 0
integrationAngleMax = 0.001
randomSeed = 0
includeOccupancy = true
includeThermalEffects = true
alsoDoCPUWork = true
save2DOutput = false
save3DOutput = true
save4DOutput = false
numGPUs = 1
numStreamsPerGPU = 3
alsoDoCPUWork = 1
earlyCPUStopCount = 100
Data Transfer : Single Transfer
Entering PRISM01_calcPotential
Waiting for threads...
Entering PRISM02_calcSMatrix
Computing compact S matrix
Launching GPU worker on stream #0 of GPU #0
Launching GPU worker on stream #1 of GPU #0
Launching GPU worker on stream #2 of GPU #0
Computing Plane Wave #0/69
Computing Plane Wave #6/69
Computing Plane Wave #12/69
Computing Plane Wave #18/69
Computing Plane Wave #24/69
Computing Plane Wave #30/69
Computing Plane Wave #36/69
Computing Plane Wave #42/69
Computing Plane Wave #48/69
Computing Plane Wave #54/69
Computing Plane Wave #60/69
Computing Plane Wave #66/69
GPU worker on stream #0 of GPU #0 finished
GPU worker on stream #2 of GPU #0 finished
GPU worker on stream #1 of GPU #0 finished
[I 13:34:26.053 NotebookApp] KernelRestarter: restarting kernel (1/5)
kernel 879c9343-88fe-43f2-a9d7-9416223907b7 restarted