
@wpf500
Created January 9, 2015 17:29
For AWS CloudWatch Logs. Finds the log streams within a log group that contain log events for a given time and displays the relevant logs.
#!/usr/bin/python
import sys, time
from datetime import datetime

import boto.logs

log_group = 'LOG GROUP NAME'

# Convert the argument to epoch milliseconds (CloudWatch's timestamp unit).
# Note: strftime('%s') is a non-portable extension that uses local time.
timestamp = int(datetime.strptime(sys.argv[1], '%Y-%m-%dT%H:%M:%S').strftime('%s')) * 1000

logs = boto.logs.connect_to_region('eu-west-1')

if len(sys.argv) > 2:
    streams = sys.argv[2:]
else:
    def find_streams(token, timestamp):
        def valid_stream(stream):
            # Default to 0 so streams without a lastIngestionTime are skipped
            return stream['creationTime'] < timestamp and stream.get('lastIngestionTime', 0) > timestamp

        data = logs.describe_log_streams(log_group_name=log_group, next_token=token)
        streams = filter(valid_stream, data['logStreams'])
        if len(streams) > 0 or 'nextToken' not in data:
            return streams
        time.sleep(0.5)  # rate limiting
        print >> sys.stderr, data['logStreams'][-1]['logStreamName']
        return find_streams(data['nextToken'], timestamp)

    streams = [stream['logStreamName'] for stream in find_streams(None, timestamp)]

for stream in streams:
    print >> sys.stderr, '============== %s ==============' % stream

    def find_events(token, last_token, timestamp):
        def valid_event(event):
            # Only events within one hour of the requested time
            return event['timestamp'] > timestamp and event['timestamp'] < timestamp + 60 * 60 * 1000

        data = logs.get_log_events(log_group_name=log_group, log_stream_name=stream,
                                   start_from_head=True, next_token=token)
        for event in filter(valid_event, data['events']):
            print event['message']
        # get_log_events returns the same token once the stream is exhausted
        if data['nextForwardToken'] != last_token:
            time.sleep(0.5)  # rate limiting
            find_events(data['nextForwardToken'], token, timestamp)

    find_events(None, None, timestamp)

wpf500 commented Jan 9, 2015

You need to follow the instructions here: http://aws.amazon.com/developers/getting-started/python/
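Those instructions boil down to giving boto your AWS credentials, typically via a config file. A sketch of the format described there (the values are placeholders):

```ini
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```

boto looks for this in `~/.boto` (environment variables work too).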

Usage:

./log-search.py <datetime>
./log-search.py <datetime> <stream names...>

<datetime> has an exact format, e.g.:
2014-10-30T12:34:56
2015-01-05T11:11:11

Use the latter command if you already know the stream names you want to search (saves having to look them up)
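The script turns `<datetime>` into epoch milliseconds via `strftime('%s')`, which interprets the value in local time and is a non-portable extension. A rough portable equivalent, assuming the timestamp is meant as UTC (a local-time variant would use `time.mktime` instead):

```python
import calendar
from datetime import datetime

def to_epoch_ms(s):
    # Parse the exact format the script expects
    dt = datetime.strptime(s, '%Y-%m-%dT%H:%M:%S')
    # timegm treats the tuple as UTC, unlike strftime('%s') (local time)
    return calendar.timegm(dt.timetuple()) * 1000

print(to_epoch_ms('2015-01-05T11:11:11'))  # -> 1420456271000
```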

@shailendersingh2

My requirement is to continuously read the latest entries from a CloudWatch log group and send them to another application, like polling, but your script shows no results and stays at the following output. I have already configured the group name inside the script, and if I don't provide a stream name it fails outright, so any help would be appreciated.
#./log-search.py 2017-10-05T07:26:43 db-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
============== db-XXXXXXXXXXXXXXXXXXXXXXXXXXX ==============
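For continuous polling like that, one approach is to call the API repeatedly and drop events already forwarded. A minimal sketch of the deduplication side only (the `eventId` field is what FilterLogEvents-style responses expose, e.g. boto3's `filter_log_events`; the helper itself is hypothetical, not part of the gist):

```python
def new_events(events, seen_ids):
    # Keep only events that have not been forwarded yet, remembering
    # their ids so repeated polls do not emit duplicates.
    fresh = [e for e in events if e['eventId'] not in seen_ids]
    seen_ids.update(e['eventId'] for e in fresh)
    return fresh
```

Each poll would pass the same `seen_ids` set and forward only what `new_events` returns.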

And in another case it gave this error:

============== db-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx ==============
Traceback (most recent call last):
  File "./log-search.py", line 44, in <module>
    find_events(None, None, timestamp)
  File "./log-search.py", line 35, in find_events
    data = logs.get_log_events(log_group_name=log_group, log_stream_name=stream, start_from_head=True, next_token=token)
  File "/usr/local/lib/python2.7/dist-packages/boto/logs/layer1.py", line 412, in get_log_events
    body=json.dumps(params))
  File "/usr/local/lib/python2.7/dist-packages/boto/logs/layer1.py", line 566, in make_request
    response_body = response.read().decode('utf-8')
  File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 410, in read
    self._cached_response = http_client.HTTPResponse.read(self)
  File "/usr/lib/python2.7/httplib.py", line 596, in read
    s = self._safe_read(self.length)
  File "/usr/lib/python2.7/httplib.py", line 703, in _safe_read
    chunk = self.fp.read(min(amt, MAXAMOUNT))
  File "/usr/lib/python2.7/socket.py", line 384, in read
    data = self._sock.recv(left)
  File "/usr/lib/python2.7/ssl.py", line 756, in recv
    return self.read(buflen)
  File "/usr/lib/python2.7/ssl.py", line 643, in read
    v = self._sslobj.read(len)
ssl.SSLError: ('The read operation timed out',)
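A read timeout like this is usually transient; one way to cope is to wrap the API call in a small retry helper (a hypothetical sketch, not something the gist provides):

```python
import time

def call_with_retries(fn, attempts=3, delay=0.5):
    # Call fn(), retrying on any exception up to `attempts` times,
    # sleeping `delay` seconds between tries; re-raise the last error.
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)
```

The `get_log_events` call in the script would then be invoked as `call_with_retries(lambda: logs.get_log_events(...))`.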
