To extend a logical volume after increasing the disk size, let's say an AWS EBS volume:
sudo pvresize /dev/sdb
sudo lvextend -l +100%FREE /dev/vgName/lvName
sudo resize2fs /dev/vgName/lvName
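Note that resize2fs applies to ext2/3/4 filesystems; if the filesystem is XFS, grow it via xfs_growfs on the mount point instead. To confirm the physical volume, logical volume, and filesystem all picked up the new size (the mount point below is a placeholder):
sudo pvs
sudo lvs
df -h <mountPoint>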
To create an SSH key:
ssh-keygen -f ~/.ssh/keyName_rsa -t rsa -b 4096
Then, upload it to AWS Secrets Manager so it can be used, for example, as the Terraform key_name when creating instances:
aws secretsmanager create-secret --name "Path/To/KeyName" --description "Added Manually" --secret-string "$(cat ~/.ssh/keyName_rsa.pub)"
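One way to consume the stored public key later from the CLI is to pull it back out and register it as an EC2 key pair that key_name can reference (a sketch; the key pair name and temporary file path here are made-up examples, and the secret path matches the one created above):
aws secretsmanager get-secret-value --secret-id "Path/To/KeyName" --query SecretString --output text > /tmp/keyName_rsa.pub
aws ec2 import-key-pair --key-name "keyName" --public-key-material fileb:///tmp/keyName_rsa.pub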
If you are getting the following error while indexing:
{
  "error": {
    "root_cause": [
      {
        "type": "cluster_block_exception",
        "reason": "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
      }
    ]
  }
}
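This block is typically applied when Elasticsearch hits the flood-stage disk watermark. After freeing up disk space, you can clear the block on all indices:
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": null}'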
To show all collections for a specific database:
mongo <dbName> --quiet --eval "db.getCollectionNames().join('\n')"
If you want to know on which nodes your pods are running, use this alias:
alias konode='kubectl get pod -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName'
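It accepts the usual kubectl flags, for example:
konode -n <namespace>
konode --all-namespaces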
To download all documents from an index, first create a scroll search and note the scroll ID (_scroll_id) it returns:
curl -X POST "localhost:9200/<indexName>/_search?scroll=5m&pretty&size=10000"
This keeps the search context open for 5 minutes.
Then send another request to scroll through the pages of data until no more hits are returned:
curl -X GET "localhost:9200/_search/scroll?pretty&scroll=5m&scroll_id=<searchIdFromThePreviousStep>"
If you want to export specific collections from a specific database, you can use this bash loop:
for collection in \
$(mongo <databaseName> --quiet --eval "rs.slaveOk(); db.getCollectionNames().join('\n')" | grep <collectionPrefix>) ; \
do mongoexport --collection=$collection --db=<databaseName> --out=$collection ; \
done
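A matching loop to load the exports back in (a sketch; assumes the export files produced above sit in the current directory and target the same database):
for collection in <collectionPrefix>* ; \
do mongoimport --collection="$collection" --db=<databaseName> --file="$collection" ; \
done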
Use this to query the disk usage of your Elasticsearch cluster:
curl -XGET "http://localhost:9200/_cat/allocation?v&pretty"
Output:
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
3213 460.2gb 584.9gb 1.3tb 1.9tb 29 1.1.1.1 1.1.1.1 host1.example.com
3213 479.5gb 599.3gb 1.3tb 1.9tb 29 1.1.1.2 1.1.1.2 host2.example.com
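To drill down into which indices are using that space, sorted by size (the sort parameter assumes a reasonably recent Elasticsearch version):
curl -XGET "http://localhost:9200/_cat/indices?v&s=store.size:desc&pretty"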
If you want to know who changed a specific file in your GitHub repo, and when:
git log --follow -- <fileName>
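For a more compact view with one line per change (hash, author, date, subject):
git log --follow --pretty=format:"%h %an %ad %s" --date=short -- <fileName>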