Last active
June 29, 2020 17:44
require 'elastic-workplace-search'
require 'json'
require 'zlib'

Elastic::WorkplaceSearch.access_token = 'my-access-token'
Elastic::WorkplaceSearch.endpoint = 'https://my-endpoint.ent-search.us-central1.gcp.cloud.es.io/api/ws/v1'
client = Elastic::WorkplaceSearch::Client.new
content_source_key = 'my-source-key'

documents = []
id = nil
i = 0

# https://dumps.wikimedia.org/other/cirrussearch/20200518/enwiki-20200518-cirrussearch-content.json.gz
gzfile = File.open("enwiki-20200518-cirrussearch-content.json.gz")
data = Zlib::GzipReader.new(gzfile)
data.each_line do |line|
  parsed = JSON.parse(line)

  # CirrusSearch dumps alternate action lines (carrying the _id) with
  # source lines; remember the id, then read the source on the next line.
  if parsed['index'] && parsed['index']['_type'] == "page" && parsed['index']['_id']
    id = parsed['index']['_id']
    next
  end

  i += 1
  if parsed['title'].nil?
    puts "Skipping line #{i} with id #{id} since the TITLE is empty."
    next
  end

  doc = {}
  doc['id'] = id
  doc['title'] = parsed['title']
  doc['timestamp'] = parsed['timestamp']
  doc['create_timestamp'] = parsed['create_timestamp']
  doc['incoming_links'] = parsed['incoming_links']
  doc['category'] = parsed['category']
  doc['text'] = parsed['text']
  doc['text_bytes'] = parsed['text_bytes']
  doc['content_model'] = parsed['content_model']
  doc['heading'] = parsed['heading']
  doc['opening_text'] = parsed['opening_text']
  doc['popularity_score'] = parsed['popularity_score']
  doc['url'] = "https://en.wikipedia.org/wiki/#{doc['title'].gsub(/ /, '_')}"
  documents << doc

  # Index in batches of 100 documents.
  if i % 100 == 0
    begin
      document_receipts = client.index_documents(content_source_key, documents)
      puts "Uploaded #{i} documents"
    rescue Elastic::WorkplaceSearch::ClientException => e
      puts e
    end
    documents = []
  end
end

# Flush the final partial batch.
unless documents.empty?
  client.index_documents(content_source_key, documents)
  puts "Uploaded #{i} documents"
end
Increasing the timeout fixed this:
client = Elastic::WorkplaceSearch::Client.new(overall_timeout: 300)
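Besides raising the client timeout, a batch uploader can also be made resilient to sporadic timeouts by retrying each batch with exponential backoff. A minimal sketch, with a hypothetical stub client and error class standing in for `Elastic::WorkplaceSearch::Client` and its timeout exception:

```ruby
# Stand-in error class; the real client raises its own exception type.
class StubTimeoutError < StandardError; end

# Stub client that times out a configurable number of times before succeeding,
# to simulate the intermittent failures seen during the Wikipedia ingest.
class StubClient
  def initialize(fail_times: 0)
    @fail_times = fail_times
    @calls = 0
  end

  def index_documents(_content_source_key, docs)
    @calls += 1
    raise StubTimeoutError, "request timed out" if @calls <= @fail_times
    docs.map { |d| { 'id' => d['id'], 'errors' => [] } }
  end
end

# Retry a batch upload up to max_attempts times, doubling the delay each time.
def index_with_retry(client, source_key, docs, max_attempts: 3, base_delay: 1)
  attempts = 0
  begin
    attempts += 1
    client.index_documents(source_key, docs)
  rescue StubTimeoutError
    raise if attempts >= max_attempts    # give up after the final attempt
    sleep(base_delay * (2**(attempts - 1)))  # exponential backoff: 1s, 2s, 4s...
    retry
  end
end
```

With this in the loop, a single slow request no longer aborts the whole ~6M-document run; only a batch that fails `max_attempts` times in a row raises.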
I'm trying to track down what's causing this timeout. It isn't consistent: in this run I was able to ingest ~1,154,000 documents before it timed out, while the previous run timed out after ~74,000 documents.
I'm ingesting an English Wikipedia export into Workplace Search. It contains ~6M documents (see the comment in the code for the source URL).
Screenshots of the cluster size are attached.