@cprima, last active May 11, 2022
Elastic Logstash Kibana UiPath Studio Community Execution.log local

UiPath Studio Execution.log ingestion to Elastic

UiPath Studio ─────► Filebeat ─────► Logstash ─────► Elastic ──────► kibana

This gist contains 4 config files for the Elastic Stack, tested with version 8.2 on Windows.

  • elasticsearch.yml: no modifications, except that network.host (published here as 127.0.0.1) was set to a proper IPv4 address in my environment prior to publishing
  • kibana.yml: no modifications
  • filebeat.yml: enabled the first input, set the output to Logstash, uncommented setup.kibana
  • logstash.conf: the major part of this gist; the pipeline
    • splits each incoming line into time, loglevel, and jsonstring
    • ingests the jsonstring as JSON
    • removes redundant fields
    • resets the @timestamp field from the timeStamp value inside the JSON

It will ingest lines such as:

20:07:55.7614 Info {"message":"BlankProcess execution started","level":"Information","logType":"Default","timeStamp":"2022-05-05T20:07:55.7551414+02:00","fingerprint":"5d2143c3-64a2-435b-8d0c-2b9732196f94","windowsIdentity":"WORKGROUP\\user","machineName":"client","fileName":"Main","initiatedBy":"Studio","processName":"BlankProcess","processVersion":"1.0.0","jobId":"473e70dc-d2ee-4009-af9b-22db8f2923f7","robotName":"rob01","machineId":7654321,"organizationUnitId":1234567}
20:07:57.0361 Trace {"message":"Initializing settings...","level":"Trace","logType":"User","timeStamp":"2022-05-05T20:07:57.0361775+02:00","fingerprint":"51090979-56b0-40ec-95b7-b5b4f1368bce","windowsIdentity":"WORKGROUP\\user","machineName":"client","fileName":"InitAllSettings","processName":"BlankProcess","processVersion":"1.0.0","jobId":"473e70dc-d2ee-4009-af9b-22db8f2923f7","robotName":"rob01","machineId":7654321,"organizationUnitId":1234567}
20:07:57.5225 Trace {"message":"[OrchestratorQueueName, ProcessABCQueue]\r\n[OrchestratorQueueFolder, ]\r\n[logF_BusinessProcessName, Framework]\r\n[MaxRetryNumber, 0]\r\n[MaxConsecutiveSystemExceptions, 0]\r\n[ExScreenshotsFolderPath, Exceptions_Screenshots]\r\n[LogMessage_GetTransactionData, Processing Transaction Number: ]\r\n[LogMessage_GetTransactionDataError, Error getting transaction data for Transaction Number: ]\r\n[LogMessage_Success, Transaction Successful.]\r\n[LogMessage_BusinessRuleException, Business rule exception.]\r\n[LogMessage_ApplicationException, System exception.]\r\n[ExceptionMessage_ConsecutiveErrors, The maximum number of consecutive system exceptions was reached. ]\r\n[RetryNumberGetTransactionItem, 2]\r\n[RetryNumberSetTransactionStatus, 2]\r\n[ShouldMarkJobAsFaulted, False]","level":"Trace","logType":"User","timeStamp":"2022-05-05T20:07:57.5225678+02:00","fingerprint":"95905515-1123-4b0c-a05d-bc33ad2b39d3","windowsIdentity":"WORKGROUP\\user","machineName":"client","fileName":"Main","foo":"bar","logVersion":"v4","processName":"BlankProcess","processVersion":"1.0.0","jobId":"473e70dc-d2ee-4009-af9b-22db8f2923f7","robotName":"rob01","machineId":7654321,"organizationUnitId":1234567}
20:07:57.5255 Trace {"message":"[OrchestratorQueueName, ProcessABCQueue]\r\n[OrchestratorQueueFolder, ]\r\n[logF_BusinessProcessName, Framework]\r\n[MaxRetryNumber, 0]\r\n[MaxConsecutiveSystemExceptions, 0]\r\n[ExScreenshotsFolderPath, Exceptions_Screenshots]\r\n[LogMessage_GetTransactionData, Processing Transaction Number: ]\r\n[LogMessage_GetTransactionDataError, Error getting transaction data for Transaction Number: ]\r\n[LogMessage_Success, Transaction Successful.]\r\n[LogMessage_BusinessRuleException, Business rule exception.]\r\n[LogMessage_ApplicationException, System exception.]\r\n[ExceptionMessage_ConsecutiveErrors, The maximum number of consecutive system exceptions was reached. ]\r\n[RetryNumberGetTransactionItem, 2]\r\n[RetryNumberSetTransactionStatus, 2]\r\n[ShouldMarkJobAsFaulted, False]","level":"Trace","logType":"User","timeStamp":"2022-05-05T20:07:57.5255568+02:00","fingerprint":"c6010783-c368-4e31-b50f-8fa7fe06ac71","windowsIdentity":"WORKGROUP\\user","machineName":"client","fileName":"Main","foo":"bar","processName":"BlankProcess","processVersion":"1.0.0","jobId":"473e70dc-d2ee-4009-af9b-22db8f2923f7","robotName":"rob01","machineId":7654321,"organizationUnitId":1234567}
20:07:57.5255 Info {"message":"BlankProcess execution ended","level":"Information","logType":"Default","timeStamp":"2022-05-05T20:07:57.5280823+02:00","fingerprint":"ccea9910-f398-4cfb-b4aa-4205bff7a6ba","windowsIdentity":"WORKGROUP\\user","machineName":"client","fileName":"Main","totalExecutionTimeInSeconds":1,"totalExecutionTime":"00:00:01","foo":"bar","processName":"BlankProcess","processVersion":"1.0.0","jobId":"473e70dc-d2ee-4009-af9b-22db8f2923f7","robotName":"rob01","machineId":7654321,"organizationUnitId":1234567}
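The splitting, JSON parsing, and timestamp handling that the Logstash pipeline performs can be sketched in plain Python against the first sample line; `parse_execution_line` is a hypothetical helper for illustration, not part of any of the config files:

```python
import json
import re
from datetime import datetime

def parse_execution_line(line: str) -> dict:
    # hypothetical helper mimicking the Logstash filter chain:
    # dissect "%{time} %{loglevel} %{jsonstring}", parse the JSON payload,
    # and derive @timestamp from its timeStamp field
    time_part, loglevel, jsonstring = line.split(" ", 2)
    data = json.loads(jsonstring)
    # trim fractional seconds to 6 digits so datetime.fromisoformat
    # accepts .NET's 7-digit precision on Python < 3.11
    ts = re.sub(r"\.(\d{6})\d+", r".\1", data["timeStamp"])
    return {
        "loglevel": loglevel,
        "@timestamp": datetime.fromisoformat(ts),
        "data": data,
    }

line = ('20:07:55.7614 Info {"message":"BlankProcess execution started",'
        '"level":"Information","timeStamp":"2022-05-05T20:07:55.7551414+02:00"}')
event = parse_execution_line(line)
print(event["loglevel"])                # Info
print(event["data"]["message"])         # BlankProcess execution started
```

The real pipeline does the same three steps with the dissect, json, and date filters, then drops the raw jsonstring and message fields.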

Download for Windows (no installation, no Admin permissions needed)

Extract and run in 4 PowerShell windows:

.\bin\elasticsearch.bat
.\bin\logstash.bat -f .\config\logstash.conf
.\filebeat.exe -e -c .\filebeat.yml
.\bin\kibana.bat
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
# Before you set out to tweak and tune the configuration, make sure you
# understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
network.host: 127.0.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false
#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 03-05-2022 19:46:21
#
# --------------------------------------------------------------------------------
# Enable security features
xpack.security.enabled: false
xpack.security.enrollment.enabled: false
# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12
# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: false
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["BOREAS"]
# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0
# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
###################### Filebeat Configuration Example #########################
# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: filestream
  # Unique ID among all inputs, an ID is required.
  id: my-uipath-execution-log-id
  enabled: true
  paths:
    - C:\Users\cpm\AppData\Local\UiPath\Logs\*Execution.log
  fields:
    foo: bar
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  reload.period: 60s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  host: "localhost:5601"
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
# username: "elastic"
# password: "changeme"
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
#####
# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html
# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false
# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "kibana_system"
#elasticsearch.password: "pass"
# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
# elasticsearch.serviceAccountToken: "my_token"
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024
# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000
# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug
# Enables you to specify a file where Kibana stores log output.
#logging.appenders.default:
# type: file
# fileName: /var/logs/kibana.log
# layout:
# type: json
# Logs queries sent to Elasticsearch.
#logging.loggers:
# - name: elasticsearch.query
# level: debug
# Logs http responses.
#logging.loggers:
# - name: http.server.response
# level: debug
# Logs system usage information.
#logging.loggers:
# - name: metrics.ops
# level: debug
# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data
# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"
# =================== Frequently used (Optional)===================
# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.
# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000
# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster’s `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb
# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15
# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
#data.autocomplete.valueSuggestions.timeout: 1000
# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#data.autocomplete.valueSuggestions.terminateAfter: 100000
# This section was automatically generated during setup.
elasticsearch.hosts: ['http://127.0.0.1:9200']
elasticsearch.serviceAccountToken: changeme
elasticsearch.ssl.certificateAuthorities: ['D:\opt\kibana-8.2.0\data\ca_1651608399877.crt']
#xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://10.38.20.11:9200'], ca_trusted_fingerprint: 7251473f688a429cf3958d9234d7932a7bd795d33315ca23442062242cbfaeb6}]
# Sample Logstash configuration for creating a simple
# Beats->Logstash->Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}

filter {
  dissect {
    mapping => {
      "message" => "%{time} %{loglevel} %{jsonstring}"
    }
  }
  json {
    source => "[jsonstring]"
    target => "data"
  }
  mutate {
    remove_field => ["jsonstring", "message"]
  }
  date {
    match => ["[data][timeStamp]", "ISO8601"]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "uipath-%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  # stdout {
  #   codec => rubydebug
  # }
}
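For reference, the sprintf pattern in the elasticsearch output expands to one index per day from Beats metadata; a minimal sketch of that expansion, assuming the shipper is Filebeat 8.2.0 (Logstash formats %{+YYYY.MM.dd} from the event's @timestamp in UTC):

```python
from datetime import date

def index_name(beat: str, version: str, day: date) -> str:
    # expands uipath-%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}
    return f"uipath-{beat}-{version}-{day:%Y.%m.%d}"

print(index_name("filebeat", "8.2.0", date(2022, 5, 5)))  # uipath-filebeat-8.2.0-2022.05.05
```

Daily indices like this keep retention simple: old days can be dropped with a single wildcard-aware delete or an ILM policy.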