
Ankita Anil Verma (ankita-anil-verma)

  • @gitlab.vedams
  • Bangalore
ankita-anil-verma / infra1 tcp dump & infra2 ping
Last active February 28, 2018 08:06
Infra1 in OSA can communicate with the external lbvip, but the rest of the hosts are not able to ping it. I wanted to check the flow of packets, but I can't analyse the tcpdump capture at infra1.
root@infra2:/home/osadmin# ping <external lbvip> -c 5 -vv
PING <external lbvip> (<external lbvip>) 56(84) bytes of data.
From <gateway of internal n/w> icmp_seq=1 Destination Host Unreachable
From <gateway of internal n/w> icmp_seq=2 Destination Host Unreachable
From <gateway of internal n/w> icmp_seq=3 Destination Host Unreachable
From <gateway of internal n/w> icmp_seq=4 Destination Host Unreachable
From <gateway of internal n/w> icmp_seq=5 Destination Host Unreachable
root@infra2:/home/osadmin# tcpdump -i br-vlan host <external lbvip> -c 4 -vv
tcpdump: listening on br-vlan, link-type EN10MB (Ethernet), capture size 262144 bytes
ankita-anil-verma / at infra nodes
Last active February 16, 2018 11:24
network bridges
root@infra2:/home/osadmin# ifconfig
01da5acb_eth0 Link encap:Ethernet HWaddr fe:53:ee:fb:98:42
inet6 addr: fe80::fc53:eeff:fefb:9842/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:930 errors:0 dropped:0 overruns:0 frame:0
TX packets:1722 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:79520 (79.5 KB) TX bytes:2323969 (2.3 MB)
01da5acb_eth1 Link encap:Ethernet HWaddr fe:6a:c9:bf:48:7f
ankita-anil-verma / aptget upgrade & content of var-log-xc-cache-prep-commands.log
Created February 12, 2018 15:04
setup-hosts.yml execution terminates with an error
apt-get dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following package was automatically installed and is no longer required:
libopts25
Use 'sudo apt autoremove' to remove it.
The following packages will be upgraded:
python3-update-manager resolvconf update-manager-core
ankita-anil-verma / LXC Cache
Last active March 27, 2022 05:03
On executing openstack-ansible setup-hosts.yml --limit infra3, the setup of host infra3 fails at the task "Ensure that the LXC cache has been prepared".
FAILED - RETRYING: Ensure that the LXC cache has been prepared (1 retries left).
fatal: [infra3]: FAILED! => {"ansible_job_id": "766365788508.23851", "attempts": 60, "changed": false, "failed": true, "finished": 0, "started": 1}
NO MORE HOSTS LEFT ***************************************************************************************************
PLAY RECAP ***********************************************************************************************************
infra3 : ok=202 changed=80 unreachable=0 failed=1
ankita-anil-verma / Weight Initialization
Created June 8, 2017 06:20
Before we can begin to train the network, we have to initialize its parameters.
Small random numbers: initialize the weights of the neurons to small random numbers; doing so is referred to as symmetry breaking. The idea is that the neurons all start out random and unique, so they compute distinct updates and integrate themselves as diverse parts of the full network. The implementation for one weight matrix might look like W = 0.01 * np.random.randn(D, H), where randn samples from a zero-mean, unit-standard-deviation Gaussian. With this formulation, every neuron's weight vector is initialized as a random vector sampled from a multi-dimensional Gaussian, so the neurons point in random directions in the input space. It is also possible to use small numbers drawn from a uniform distribution, but this seems to have relatively little impact on the final performance in practice.
weights = tf.Variable(tf.random_normal([784, 200], stddev=0.35), name="weights")
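A minimal, self-contained NumPy sketch of the same small-random-number initialization (the dimensions reuse the illustrative 784 and 200 from the examples above; this is an added illustration, not part of the original note):

import numpy as np

D, H = 784, 200  # input and hidden dimensions, matching the example above

# Symmetry breaking: each weight is an independent draw from a zero-mean,
# unit-standard-deviation Gaussian, scaled down by 0.01, so every neuron
# starts with a distinct random weight vector.
W = 0.01 * np.random.randn(D, H)

print(W.mean(), W.std())  # roughly 0 and roughly 0.01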
Warning: It’s not necessarily the case that smaller numbers will work strictly better. For example, a neural network layer with very small weights will compute very small gradients on its inputs during backpropagation, since that gradient is proportional to the value of the weights. This can greatly diminish the gradient signal flowing backward through the network, which becomes a concern for deep networks.
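To make the warning concrete, here is a hypothetical toy calculation (an illustrative sketch, not from the original note): for a linear layer y = x @ W, the gradient that flows backward to x is upstream_grad @ W.T, so it shrinks in proportion to the weight scale.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 784))
upstream = rng.standard_normal((1, 200))  # stand-in for the gradient arriving from the next layer

for scale in (1.0, 0.01, 0.0001):
    W = scale * rng.standard_normal((784, 200))
    y = x @ W                # forward pass through the linear layer
    grad_x = upstream @ W.T  # backward gradient w.r.t. x is proportional to W
    print(f"scale={scale:g}  |grad_x|={np.linalg.norm(grad_x):.4f}")

Smaller initial weights produce a proportionally smaller gradient signal at the input, which is why very small initial weights can be a problem in deep networks.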