Aymen Alsaadi (AymenFJA)
AymenFJA / clean_py.sh
Created August 14, 2023 14:04 — forked from hbsdev/clean_py.sh
Recursively remove all .pyc files and __pycache__ directories in the current directory.
#!/bin/sh
# recursively removes all .pyc files and __pycache__ directories in the current
# directory
# grouping the two name tests and using -print0/xargs -0 keeps paths with spaces safe
find . \( -name '__pycache__' -o -name '*.pyc' \) -print0 | xargs -0 rm -rf
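The same cleanup can also be done from Python itself, which sidesteps shell quoting issues with unusual file names. A minimal standard-library sketch (the list() snapshots are so deletions do not disturb the directory walk):

import pathlib
import shutil

# Delete cache directories first; their .pyc contents go with them.
for cache_dir in list(pathlib.Path('.').rglob('__pycache__')):
    shutil.rmtree(cache_dir, ignore_errors=True)
# Remove any stray .pyc files left outside __pycache__.
for pyc in list(pathlib.Path('.').rglob('*.pyc')):
    pyc.unlink()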
AymenFJA / bash_app_for_rp.py
Last active March 3, 2022 21:21
Prepared for Logan Ward
import radical.pilot as rp
from parsl.app.app import python_app, bash_app

@bash_app
def mpi_simulate(x: float, ptype=rp.MPI, nproc=3, pre_exec=[]):
    # The returned string is the shell command Parsl will execute.
    return './simulate {0}'.format(x)

result = mpi_simulate(2, ptype=rp.MPI, nproc=3, pre_exec=['cd simulate_pre_exe'])
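Calling a bash_app does not run the command synchronously; Parsl returns an AppFuture immediately. A hedged usage note (assuming a Parsl config with a suitable executor has already been loaded):

result.result()  # block until './simulate 2' exits; raises if the command returns non-zero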
AymenFJA / test_multi_gpu_mpi.py
Last active January 15, 2022 02:06
test_multi_gpu_mpi.py
from __future__ import print_function
'''
Basic multi-GPU computation example using the TensorFlow library.
Single/multi-GPU non-MPI author: Aymeric Damien
Multi-GPU large-scale/multi-node MPI author: Aymen Alsaadi
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''
'''
This tutorial requires your machine to have 2 GPUs.
'''
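The heart of the non-MPI version is pinning one copy of the computation to each GPU with tf.device. A minimal sketch of that pattern (assuming TensorFlow 2.x and two visible GPUs; the matrices and names are illustrative, not the gist's actual code):

import tensorflow as tf

a = tf.random.normal([1000, 1000])
partials = []
for i in range(2):
    # Each iteration places one matrix product on a different GPU.
    with tf.device('/GPU:{}'.format(i)):
        partials.append(tf.matmul(a, a))
# Combine the per-GPU results on the host.
with tf.device('/CPU:0'):
    total = tf.add_n(partials)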
AymenFJA / ICEBERG-hackathon.md
Last active July 15, 2020 19:05
Hackathon_steps

1. Log in to the VM.

  • From your terminal, ssh to the following IP:

    ssh user_name@149.165.156.107

2. Generate an XSEDE certificate.

  • To log in to Bridges from your VM, you first need an XSEDE certificate, generated with the following command (the -t 72 flag requests a 72-hour credential lifetime):

    myproxy-logon -s myproxy.xsede.org -l user_name -t 72
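The same credential can also be fetched programmatically with the myproxy.client library, which appears again in the last snippet of this gist. A hedged sketch mirroring the command above ("user_name" and "password" are placeholders for your XSEDE credentials):

from myproxy.client import MyProxyClient

# bootstrap=True retrieves the server's trust roots on first use.
client = MyProxyClient(hostname="myproxy.xsede.org")
cert, private_key = client.logon("user_name", "password", bootstrap=True)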

#!/bin/bash
# Stop the MPS control daemon for each GPU and clean up its pipe/log directories.
NGPUS=2 # number of GPUs with compute capability 3.5 per server
for ((i=0; i<$NGPUS; i++))
do
    echo $i
    export CUDA_MPS_PIPE_DIRECTORY=/home/aymen/mps_$i
    # 'quit' tells the daemon listening on this pipe to shut down.
    echo "quit" | nvidia-cuda-mps-control
    rm -rf /home/aymen/mps_$i
    rm -rf /home/aymen/mps_log_$i
done
#!/bin/bash
# Route each local MPI rank to the MPS pipe directory of its GPU, then run the kernel.
export CUDA_VISIBLE_DEVICES=0
lrank=$OMPI_COMM_WORLD_LOCAL_RANK
case ${lrank} in
[0])
    export CUDA_MPS_PIPE_DIRECTORY=/home/aymen/mps_0; ./vector_add
    ;;
[1])
    export CUDA_MPS_PIPE_DIRECTORY=/home/aymen/mps_1; ./vector_add
    ;;
esac
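A wrapper like this is typically handed to the MPI launcher in place of the executable itself, e.g. mpirun -np 2 ./wrapper.sh (an illustrative launch line; the wrapper's filename is not shown in the preview), so that each local rank talks to the MPS daemon that owns its GPU.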
#!/bin/bash
NGPUS=2 # number of GPUs with compute capability 3.5 per server
# Start an MPS control daemon for each GPU
for ((i=0; i<$NGPUS; i++))
do
    mkdir /home/aymen/mps_$i
    mkdir /home/aymen/mps_log_$i
    export CUDA_VISIBLE_DEVICES=$i
    export CUDA_MPS_PIPE_DIRECTORY=/home/aymen/mps_$i
    export CUDA_MPS_LOG_DIRECTORY=/home/aymen/mps_log_$i
    # The preview is truncated here; presumably the loop then launches the daemon:
    nvidia-cuda-mps-control -d
done
cnn_time  img_name
10        Image1
12        Image2
14        Image3
23        Image4
30        Image5
import os
import radical.pilot as rp
from radical.entk import Pipeline, Stage, Task, AppManager

crop_size = 360  # convert this to an argument later
worker_root = r"/pylon5/mc3bggp/aymen/local_dir/datasets/polygon/"  # convert this to an argument later
weights_path = r"/pylon5/mc3bggp/aymen/local_dir/datasets/logs/ice_wedge_polygon20180823T1403/mask_rcnn_ice_wedge_polygon_0008.h5"  # convert this to an argument later
imgs_path = r"/pylon5/mc3bggp/aymen/local_dir/datasets/polygon/input_img/"  # convert this to an argument later
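The preview cuts off before the workflow is assembled. For orientation, a hedged sketch of how variables like these typically feed an EnTK pipeline (the driver script name, resource description, and RabbitMQ endpoint are illustrative assumptions, not the gist's actual code):

p = Pipeline()
s = Stage()

t = Task()
t.executable = 'python3'
t.arguments = ['inference.py',  # hypothetical driver script
               '--crop_size', str(crop_size),
               '--weights', weights_path,
               '--input', imgs_path]
s.add_tasks(t)
p.add_stages(s)

appman = AppManager(hostname='localhost', port=5672)  # RabbitMQ endpoint
appman.resource_desc = {'resource': 'xsede.bridges',  # the pylon5 paths suggest PSC Bridges
                        'walltime': 60,
                        'cpus': 1}
appman.workflow = [p]
appman.run()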
from myproxy.client import MyProxyClient

# username and password are assumed to be defined elsewhere in this file.
myproxy_clnt = MyProxyClient(hostname="myproxy.somewhere.ac.uk")
cert, private_key = myproxy_clnt.logon(username, password, bootstrap=True)