Here are notes and experiments relating to the Deep Dive Series. See also the in-depth documentation at the Colfax Cluster Get Started Page.
- At the time of writing (Aug 2017), I only have visibility of a cluster of Knights Landing nodes, each with a Xeon Phi 7210 host processor and no coprocessors.
- From what I gather, as we move towards 2018 these Knights Landing host processors may gain attached Knights Mill coprocessors.
- Because no coprocessors are available at this point, some of the coprocessor-related exercises may not be applicable (e.g. offloading code from the Xeon Phi host to its coprocessors). It may still be worth watching the videos for these parts to get a feel for how parallelization can be achieved with offloading.
- Intel Colfax Cluster - How to submit a job to Colfax HPC Cluster Nodes - Option 1 (direct command)
- Intel Colfax Cluster - How to submit a job to Colfax HPC Cluster Nodes - Option 2 (via shell script); both options are sketched in the qsub example after this list
- Intel Colfax Cluster - How to SSH from Login Node to Cluster Node Temporarily
- Intel Colfax Cluster - How many nodes are available in the Colfax Cluster - pbsnodes
- Intel Colfax Cluster - How to compile (C, C++, Fortran) codes with Intel Parallel Studio XE
- Intel Colfax Cluster - How to interactively submit a qsub job on a Xeon Phi (Knights Landing) enabled Cluster Node
- Intel Colfax Cluster - How to visualize Knights Landing (knl) NUMA Nodes and High Bandwidth Memory modes
- Intel Colfax Cluster - How to run an application in High Bandwidth Memory (HBM) mode on a Xeon Phi (Knights Landing) enabled Cluster Node; see the numactl sketch after this list
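A minimal sketch of the submission styles above, assuming a PBS/Torque setup like Colfax's; the script name `myjob.sh` and the binary `./my_app` are hypothetical placeholders:

```bash
#!/bin/bash
# myjob.sh - minimal PBS job script (script name and binary are hypothetical)
cd $PBS_O_WORKDIR        # start in the directory qsub was invoked from
echo "Running on $(hostname)"
./my_app
```

```bash
# Option 1: direct command, piped straight into qsub
echo 'hostname' | qsub

# Option 2: submit the shell script above
qsub myjob.sh

# Interactive session: qsub -I drops you into a shell on a compute node
qsub -I

# Check job status; stdout is written back as <jobname>.o<jobid>
qstat
cat myjob.sh.o*

# List the nodes in the cluster and their state
pbsnodes
```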
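For the compile and HBM items, a sketch assuming the Intel Parallel Studio XE compilers and a KNL node booted in flat memory mode, where the 16 GB of on-package MCDRAM appears as a separate memory-only NUMA node; source file names are placeholders:

```bash
# Compile C, C++, and Fortran sources with the Intel compilers,
# targeting KNL's AVX-512 instructions
icc   -xMIC-AVX512 hello.c   -o hello_c
icpc  -xMIC-AVX512 hello.cpp -o hello_cpp
ifort -xMIC-AVX512 hello.f90 -o hello_f

# Visualize the NUMA layout: in flat mode the MCDRAM (high bandwidth
# memory) shows up as a memory-only NUMA node, typically node 1
numactl -H

# Run an application with all of its allocations bound to the MCDRAM node
numactl --membind=1 ./hello_cpp
```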
- Intel Colfax Cluster - Write And Run a Simple Parallel Application On Xeon Phi (Knights Landing) Cluster Node: handy boilerplate
- Intel Colfax Cluster - Parallelize a For Loop On Xeon Phi (Knights Landing) Cluster Node: handy boilerplate (see the OpenMP sketch after the external links below)
- Intel Colfax Cluster - Perform 18 Billion Billion Operations on Xeon Phi (Knights Landing) Cluster Node in Sub-millisecond Time: a mini research experiment
- Intel Colfax Cluster - Distributed Computing and Parallel Programming - Hello World Application: handy boilerplate (see the MPI sketch after the external links below)
- Intel Colfax Cluster - Optimize a Numerical Integration Implementation with Parallel Programming and Distributed Computing: a mini research experiment
External Links for info:
- (External Link) MPI_COMM_WORLD
- (External Link) MPI Routine
- (External Link) MPI_Init
- (External Link) MPI Tutorials
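A minimal OpenMP sketch for the parallel-for boilerplate above; the loop body (a partial harmonic sum) is just an illustrative stand-in. Compile with `icc -qopenmp`:

```c
#include <stdio.h>
#include <omp.h>

int main(void) {
    const int N = 100000000;
    double sum = 0.0;

    /* Split the iterations across all available hardware threads
       (up to 256 on a 64-core Xeon Phi 7210 with 4 threads per core);
       reduction(+:sum) safely combines the per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 1; i <= N; i++) {
        sum += 1.0 / (double)i;
    }

    printf("max threads: %d, partial harmonic sum: %f\n",
           omp_get_max_threads(), sum);
    return 0;
}
```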
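And a minimal MPI Hello World using the routines linked above (MPI_Init, MPI_COMM_WORLD); with Intel MPI this compiles with `mpiicc hello_mpi.c -o hello_mpi` and runs with e.g. `mpirun -np 4 ./hello_mpi`:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);                  /* start the MPI runtime */

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* this process's id     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total process count   */

    printf("Hello World from rank %d of %d\n", rank, size);

    MPI_Finalize();                          /* shut the runtime down */
    return 0;
}
```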
- Intel Colfax Cluster - Estimate Theoretical Peak FLOPS for Intel Xeon Phi Processors; see the worked estimate after this list
- High Performance Computing (HPC) Running Intel Xeon Phi: N-body Simulation Example
- more to add here...
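As a worked example for the peak-FLOPS note above, assuming the Xeon Phi 7210's 1.3 GHz base clock (AVX-512 clocks can be lower, so treat this as an upper bound):

```latex
\mathrm{Peak}_{\mathrm{DP}}
  = \underbrace{64}_{\text{cores}}
    \times \underbrace{1.3\,\mathrm{GHz}}_{\text{clock}}
    \times \underbrace{2}_{\text{VPUs/core}}
    \times \underbrace{8}_{\text{DP lanes}}
    \times \underbrace{2}_{\text{FLOP per FMA}}
  \approx 2.66\,\mathrm{TFLOP/s}
```

Single precision doubles the lane count, giving roughly 5.3 TFLOP/s.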
- Deep Dive Series
- Deep Dive HOW To Series GitHub
- C++ Pointer and Reference
- EMACS Newbie Key Reference
- EMACS - go to line
- EMACS - Cheatsheet