This Gist contains three sample NiFi data flow templates illustrating how the NIFI-3545 enhancement works.
This example illustrates how to use the LookupRecord processor to join multiple columns from another CSV file. The key configuration is to define a result schema at the 'RecordWriter' containing the original columns AND the columns enriched by the looked-up values. Also, set 'Record Result Content' to 'Insert Record Fields' and 'Result Record Path' to '/' so that the looked-up values are merged into the original record.
Given file-1.csv looked up against file-2.csv, this example outputs the following content as a result:
Emp_Id,Name,Address,Mobile_No,Salary,Department
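What LookupRecord does here can be sketched as a plain CSV join. The following Python sketch is only a conceptual illustration: the file names and the Emp_Id key come from the example above, but the exact split of columns between file-1.csv and file-2.csv is an assumption.

```python
import csv
import io

# Assumed contents: file-2.csv holds the lookup columns keyed by Emp_Id,
# file-1.csv holds the original records. The sample values are made up.
file_2 = io.StringIO(
    "Emp_Id,Address,Mobile_No\n"
    "1,Tokyo,080-0000-0000\n"
)
file_1 = io.StringIO(
    "Emp_Id,Name,Salary,Department\n"
    "1,Alice,5000,Engineering\n"
)

# Build the lookup table keyed by Emp_Id (the role of the lookup service).
lookup = {row["Emp_Id"]: row for row in csv.DictReader(file_2)}

enriched = []
for row in csv.DictReader(file_1):
    # 'Insert Record Fields' with Result Record Path '/' merges the
    # looked-up columns into the root of the original record.
    match = lookup.get(row["Emp_Id"], {})
    merged = dict(row)
    merged.update({k: v for k, v in match.items() if k != "Emp_Id"})
    enriched.append(merged)

print(enriched[0])
```

The result record carries all six columns of the RecordWriter schema, which is why the writer schema must list both the original and the enriched fields.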
This example dashboard contains a Vega visualization, which renders a Christmas tree! When a node in the tree is clicked, a corresponding Kibana filter is created, so that the advent calendar blog post can be seen in the TSVB Markdown panel on the right-hand side.
I was planning to write a post for Japanese Elastic Stack (Elasticsearch) Advent Calendar 2022, but every slot is filled. Nice! So I'm sharing it here... :)
A detailed explanation is available here.
from org.apache.nifi.processor.io import InputStreamCallback
from java.io import BufferedReader, InputStreamReader

class ReadFirstLine(InputStreamCallback):
    __line = None

    def __init__(self):
        pass

    def process(self, inputStream):
        # Read only the first line of the incoming FlowFile content.
        reader = BufferedReader(InputStreamReader(inputStream))
        self.__line = reader.readLine()
{
  "$schema": "https://vega.github.io/schema/vega/v5.json",
  "description": "A basic bar chart example, with value labels shown upon mouse hover.",
  "data": [
    {
      "name": "table",
      "values": [
        {"name": "A", "current": 28, "upper": 80, "lower": 20},
        {"name": "B", "current": 55, "upper": 70, "lower": 30},
        {"name": "C", "current": 43, "upper": 60, "lower": 20},
NOTE: You need to specify the right 'Catalog Name', 'Schema Name' and 'Table Name' in the ConvertJSONToSQL processor so that the table schema is resolved correctly.
-- Source database and table
CREATE DATABASE nifi_a;
CREATE TABLE nifi_a.dbo.Test1
(DATE_INSERT DATE,
G_INSTITUTION_ID varchar(25)
This is a Logstash ruby filter example that renames field names within a specified hash based on each value's type. The original question was asked on this Elastic Discuss thread.
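The core renaming idea can be sketched as follows. Note this is a plain-Python illustration, not the actual rename_by_data_type.rb Ruby script, and the type-suffix convention (_str, _int, etc.) is an assumption for the sake of the example.

```python
def rename_by_type(hash_field):
    """Return a copy of the dict with each key suffixed by its value's type."""
    # Assumed suffix convention; the real script may use different names.
    suffixes = {str: "_str", int: "_int", float: "_float", bool: "_bool"}
    renamed = {}
    for key, value in hash_field.items():
        # bool is a subclass of int in Python, so check it explicitly first.
        value_type = bool if isinstance(value, bool) else type(value)
        renamed[key + suffixes.get(value_type, "")] = value
    return renamed

print(rename_by_type({"a": "x", "b": 1, "c": 2.5}))
```

In the Logstash version, the same loop would run over `event.get(params['field'])` and write the renamed keys back with `event.set`.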
With the -t option, the filter configuration can be tested:
logstash -e "filter { ruby { path => '/{path_of_the_ruby_script}/rename_by_data_type.rb' script_params => { 'field' => 'the_name_of_target_field' } } }" -t
Output: