Mahmoud Ahmed (DevMahmoud10) · GitHub Gists
DevMahmoud10 / levelOrderBottom.py
Created March 28, 2020 23:32
My solution from a month ago and now
from collections import deque
from typing import List

class Solution:
    def levelOrderBottom(self, root: "TreeNode") -> List[List[int]]:
        # LeetCode 107: collect values level by level, bottom level first.
        if not root:
            return []
        res = []
        q = deque([(root, 0)])  # BFS queue of (node, depth) pairs
        while q:
            node, level = q.popleft()
            if len(res) < level + 1:  # first node seen at this depth
                res.append([])
            res[level].append(node.val)
            if node.left:
                q.append((node.left, level + 1))
            if node.right:
                q.append((node.right, level + 1))
        return res[::-1]  # reverse so the deepest level comes first
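
A quick sanity check, assuming the standard LeetCode TreeNode with val/left/right fields; the node class below is a minimal stand-in for local runs, not part of the original gist:

# Minimal TreeNode stand-in for local testing (LeetCode provides its own).
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

# Tree [3,9,20,null,null,15,7] -> bottom-up levels [[15,7],[9,20],[3]]
root = TreeNode(3, TreeNode(9), TreeNode(20, TreeNode(15), TreeNode(7)))
print(Solution().levelOrderBottom(root))  # [[15, 7], [9, 20], [3]]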
DevMahmoud10 / .gitignore
Created May 8, 2019 14:35 — forked from santoshpy/.gitignore
.gitignore file for a Django project
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
DevMahmoud10 / LSTM_experiment_epochs_loss.txt
Created March 21, 2019 15:32
LSTM convergence optimization
Loss at epoch 0 = tensor(0.6932, grad_fn=<NllLossBackward>)
Loss at epoch 1000 = tensor(0.5234, grad_fn=<NllLossBackward>)
Loss at epoch 2000 = tensor(0.5206, grad_fn=<NllLossBackward>)
Loss at epoch 3000 = tensor(0.3477, grad_fn=<NllLossBackward>)
Loss at epoch 4000 = tensor(0.3470, grad_fn=<NllLossBackward>)
Loss at epoch 5000 = tensor(0.3468, grad_fn=<NllLossBackward>)
Loss at epoch 6000 = tensor(0.3467, grad_fn=<NllLossBackward>)
Loss at epoch 7000 = tensor(0.3466, grad_fn=<NllLossBackward>)
Loss at epoch 8000 = tensor(0.3466, grad_fn=<NllLossBackward>)
Loss at epoch 9000 = tensor(0.3466, grad_fn=<NllLossBackward>)
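
The loss starts near ln 2 ≈ 0.693, chance level for a balanced two-class problem, and plateaus around 0.347. For context, a PyTorch training loop along these lines produces this style of log; the model shape, toy data, and hyperparameters below are assumptions for illustration, not the gist's actual experiment:

import torch
import torch.nn as nn

# Hypothetical setup: a tiny LSTM binary classifier on random toy data.
torch.manual_seed(0)
x = torch.randn(64, 10, 8)      # (batch, seq_len, input_size)
y = torch.randint(0, 2, (64,))  # binary class labels

class LSTMClassifier(nn.Module):
    def __init__(self, input_size=8, hidden_size=16, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        _, (h, _) = self.lstm(x)  # h: (num_layers, batch, hidden_size)
        # Log-probabilities, as expected by NLLLoss.
        return torch.log_softmax(self.fc(h[-1]), dim=1)

model = LSTMClassifier()
criterion = nn.NLLLoss()  # matches the NllLossBackward grad_fn in the log
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(10000):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    if epoch % 1000 == 0:
        print("Loss at epoch", epoch, "=", loss)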