Last active Feb 10, 2017
""" Dump a set of user-friendly text files outlining all our ElasticSearch type mappings """
import json

type_props = []

def add_prop_and_remove_non_en_i18n(prop):
    """ Do some extra logic to remove superfluous i18n fields that aren't 'en' """
    if 'i18n' in prop and 'i18n.en' not in prop:
        return  # skip i18n variants other than 'en'
    type_props.append(prop)

def handle_props(props, prefix=''):
    for prop, mapping in props.items():
        prop = prefix + prop
        if 'properties' in mapping:  # nested type, recursively get its properties too..
            handle_props(mapping['properties'], prop + '.')
        else:
            add_prop_and_remove_non_en_i18n(prop)  # leaf field: record its dotted path

def handle_type(mapping):
    if 'properties' in mapping:
        handle_props(mapping['properties'])

if __name__ == '__main__':
    # mappings.txt is a file with JSON in it from querying ES for its mappings
    # e.g. curl -XGET 'http://localhost:9200/_mapping?pretty=true' > mappings.txt
    with open('mappings.txt') as f:
        mappings = json.load(f)
    for name, mapping in mappings['atom']['mappings'].items():
        handle_type(mapping)
        with open('type_mappings/' + name + '.txt', 'w') as fh:
            fh.write('%s\n-----\n' % name)
            for prop in sorted(type_props):
                fh.write(prop + '\n')
        type_props = []  # reset the accumulator before the next type
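The recursive flattening the script performs can be exercised on its own, without a live Elasticsearch instance. A minimal, self-contained sketch (the sample mapping below is invented for illustration):

```python
def flatten_props(props, prefix=''):
    """Flatten a nested ES 'properties' dict into dotted field paths."""
    out = []
    for name, mapping in props.items():
        path = prefix + name
        if 'properties' in mapping:  # nested object: recurse with a dotted prefix
            out.extend(flatten_props(mapping['properties'], path + '.'))
        else:
            out.append(path)
    return out

# Invented sample mapping, shaped like an ES 'properties' section
sample = {
    'title': {'type': 'text'},
    'author': {'properties': {'name': {'type': 'text'},
                              'age': {'type': 'integer'}}},
}
print(sorted(flatten_props(sample)))  # → ['author.age', 'author.name', 'title']
```

The only difference from the script above is that this version returns a list instead of appending to the module-level `type_props` accumulator.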
MikeFE commented Feb 10, 2017

If the index you're pulling from isn't named "atom", you'll need to edit the mappings['atom']['mappings'] lookup to reflect that. The script also assumes a folder called type_mappings exists under the current working directory, so it can dump the text files there.
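A quick sketch of the setup described above. The folder name and the ES URL come from the script itself; the script filename dump_mappings.py is an assumption:

```shell
# Create the output folder the script expects, then fetch the mappings JSON
mkdir -p type_mappings
curl -XGET 'http://localhost:9200/_mapping?pretty=true' > mappings.txt
```

After that, running the script (e.g. python dump_mappings.py) writes one type_mappings/<type>.txt file per mapping type.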
