Install the mapr-data-access-gateway package.
Edit the following file to enable HTTP (HTTPS is enabled by default):
/opt/mapr/data-access-gateway/conf/properties.cfg
00-00 Screen : rowType = RecordType(ANY name, ANY phone_number): rowcount = 1.0, cumulative cost = {50003.1 rows, 100007.1 cpu, 1.441792E9 io, 0.0 network, 0.0 memory}, id = 2389
00-01 Project(name=[$1], phone_number=[$0]) : rowType = RecordType(ANY name, ANY phone_number): rowcount = 1.0, cumulative cost = {50003.0 rows, 100007.0 cpu, 1.441792E9 io, 0.0 network, 0.0 memory}, id = 2388
00-02 SelectionVectorRemover : rowType = RecordType(ANY phone_number, ANY name): rowcount = 1.0, cumulative cost = {50002.0 rows, 100005.0 cpu, 1.441792E9 io, 0.0 network, 0.0 memory}, id = 2387
00-03 Limit(fetch=[1]) : rowType = RecordType(ANY phone_number, ANY name): rowcount = 1.0, cumulative cost = {50001.0 rows, 100004.0 cpu, 1.441792E9 io, 0.0 network, 0.0 memory}, id = 2386
00-04 Scan(table=[[dfs.default, /apps/crm]], groupscan=[JsonTableGroupScan [ScanSpec=JsonScanSpec [tableName=maprfs:///apps/crm, condition=(phone_number MATCHES "^.*\\Q1234\\E.*$")], columns=[`phone_number`, `name`], maxwidth
Stroke classification model, for the underwater side view:
class 4a: Freestyle Under Water Side
class 4b: Butterfly Under Water Side
class 4c: Backstroke Under Water Side
class 4d: Breaststroke Under Water Side
Rank classification model, for freestyle, underwater side view:
class 4a1: Freestyle Under Water Side Rank 1
class 4a2: Freestyle Under Water Side Rank 2
class 4a3: Freestyle Under Water Side Rank 3
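The two-level scheme above (a stroke-level class that fans out into rank-level classes) can be captured as a nested mapping. This is an illustrative sketch only; the class codes and names come from the list above, but the data structures and the `parent_stroke` helper are hypothetical, not part of any model code.

```python
# Hypothetical encoding of the two-level classification scheme:
# stroke-level classes (4a-4d) fan out into rank-level classes (4a1-4a3, ...).
STROKE_CLASSES = {
    "4a": "Freestyle Under Water Side",
    "4b": "Butterfly Under Water Side",
    "4c": "Backstroke Under Water Side",
    "4d": "Breaststroke Under Water Side",
}

# Rank-level class -> (parent stroke class, rank number)
RANK_CLASSES = {
    "4a1": ("4a", 1),
    "4a2": ("4a", 2),
    "4a3": ("4a", 3),
}

def parent_stroke(rank_code: str) -> str:
    """Return the stroke-level class name for a rank-level code, e.g. '4a2'."""
    stroke_code, _rank = RANK_CLASSES[rank_code]
    return STROKE_CLASSES[stroke_code]
```

A second-stage (rank) classifier would then only ever be invoked on frames the first-stage (stroke) classifier has already routed to the matching stroke class.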
[ODBC]
Trace=yes
Tracefile=/tmp/trace.txt

[ODBC Data Sources]
MapR Drill 64-bit=MapR Drill ODBC Driver 64-bit

[drill64]
# This key is optional; it only provides a description of the data source.
Description=MapR Drill ODBC Driver (64-bit) DSN
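A quick way to sanity-check an `odbc.ini` edit before pointing an application at it is to parse it programmatically. This sketch uses Python's standard-library `configparser` against the fragment above (embedded as a string here for self-containment; in practice you would read the real file):

```python
import configparser

# The odbc.ini fragment from above, embedded for a self-contained check.
ODBC_INI = """\
[ODBC]
Trace=yes
Tracefile=/tmp/trace.txt

[ODBC Data Sources]
MapR Drill 64-bit=MapR Drill ODBC Driver 64-bit

[drill64]
Description=MapR Drill ODBC Driver (64-bit) DSN
"""

config = configparser.ConfigParser()
config.optionxform = str  # preserve key case exactly as written in the file
config.read_string(ODBC_INI)

# Each DSN used in a connection string (e.g. "DSN=drill64") must have its own
# section; [ODBC Data Sources] advertises DSN-name -> driver-description pairs.
assert config.has_section("drill64")
print(config["drill64"]["Description"])
```

`config.optionxform = str` matters because `configparser` lowercases keys by default, which would silently mangle case-sensitive ODBC keys.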
2018-06-15 05:52:51,943 File tail to OpenTSDB/FiletailtoOpenTSDBa650441b-3c96-4088-b4e5-c37baee144f9 ERROR An exception occurred while running the pipeline, java.lang.NullPointerException ProductionPipelineRunnable *admin ProductionPipelineRunnable-FiletailtoOpenTSDBa650441b-3c96-4088-b4e5-c37baee144f9-File tail to OpenTSDB
java.lang.NullPointerException
    at java.io.Reader.<init>(Reader.java:78)
    at java.io.InputStreamReader.<init>(InputStreamReader.java:113)
    at com.streamsets.pipeline.lib.parser.DataParserFactory.createReader(DataParserFactory.java:83)
    at com.streamsets.pipeline.lib.parser.json.JsonDataParserFactory.getParser(JsonDataParserFactory.java:45)
    at com.streamsets.pipeline.lib.parser.WrapperDataParserFactory.getParser(WrapperDataParserFactory.java:65)
    at com.streamsets.pipeline.stage.processor.http.HttpProcessor.parseResponse(HttpProcessor.java:299)
    at com.streamsets.pipeline.stage.processor.http.HttpProcessor.processResponse(HttpProcessor.java:270)
    at com.streamsets.pipeline.stage.processor
bash-4.3$ export MAPR_MEP_VERSION=4
bash-4.3$ export MAPR_VERSION=6.0.0
bash-4.3$ export SDC_CONF=/etc/sdc/
bash-4.3$ export SDC_HOME=/opt/streamsets-datacollector-3.2.0.0/
bash-4.3$ /opt/streamsets-datacollector-3.2.0.0/bin/streamsets setup-mapr
+ BLACKLIST_PROP=system.stagelibs.blacklist
+ PROP_FILENAME=sdc.properties
+ POLICY_FILENAME=sdc-security.policy
+ MAPR_LIB_BASE=streamsets-datacollector-mapr
+ MAPR_SPARK_LIB='streamsets-datacollector-mapr_spark*-lib'
FROM docker.artifactory/pacc_nvidia:cuda-9.0-base
RUN apt-get update -y && \
    apt-get install -y build-essential libopenblas-dev liblapack-dev \
    libopencv-dev cuda-command-line-tools-9-0 \
    cuda-cublas-dev-9-0 \
    cuda-cudart-dev-9-0 \
    cuda-cufft-dev-9-0 \
    cuda-curand-dev-9-0 \
    cuda-cusolver-dev-9-0 \
Pipeline Status: RUNNING_ERROR: com.streamsets.pipeline.api.base.OnRecordErrorException: HTTP_01 - Error fetching resource. Status: 400 Reason: Bad Request {"error":{"code":400,"message":"One or more data points had errors","details":"Please see the TSD logs or append \"details\" to the put request","trace":"net.opentsdb.tsd.BadRequestException: One or more data points had errors\n\tat net.opentsdb.tsd.PutDataPointRpc$1GroupCB.call(PutDataPointRpc.java:601) [tsdb-2.4.0RC1.jar:a64bcee]\n\tat net.opentsdb.tsd.PutDataPointRpc.processDataPoint(PutDataPointRpc.java:664) [tsdb-2.4.0RC1.jar:a64bcee]\n\tat net.opentsdb.tsd.PutDataPointRpc.execute(PutDataPointRpc.java:278) [tsdb-2.4.0RC1.jar:a64bcee]\n\tat net.opentsdb.tsd.RpcHandler.handleHttpQuery(RpcHandler.java:283) [tsdb-2.4.0RC1.jar:a64bcee]\n\tat net.opentsdb.tsd.RpcHandler.messageReceived(RpcHandler.java:134) [tsdb-2.4.0RC1.jar:a64bcee]\n\tat org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70) [netty-3.9.4. |
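The HTTP 400 above ("One or more data points had errors") is OpenTSDB's `/api/put` endpoint rejecting the batch because at least one datapoint failed validation. Common causes are a missing or empty tag set (OpenTSDB requires at least one tag per datapoint), illegal characters in metric or tag names, or a non-numeric value. The sketch below is a hypothetical pre-flight checker, not part of OpenTSDB or StreamSets; the rules it encodes reflect OpenTSDB's documented datapoint requirements, but the TSD itself remains the final authority.

```python
import json
import re

# Conservative subset of characters OpenTSDB accepts in metric/tag names.
ALLOWED = re.compile(r"^[A-Za-z0-9./_\-]+$")

def check_datapoint(dp: dict) -> list:
    """Return a list of validation problems for one /api/put datapoint."""
    problems = []
    if not ALLOWED.match(str(dp.get("metric", ""))):
        problems.append("bad or missing metric name")
    if not isinstance(dp.get("timestamp"), (int, float)) or dp["timestamp"] <= 0:
        problems.append("timestamp must be a positive epoch number")
    if not isinstance(dp.get("value"), (int, float)):
        problems.append("value must be numeric")
    tags = dp.get("tags") or {}
    if not tags:
        problems.append("at least one tag is required")
    elif not all(ALLOWED.match(k) and ALLOWED.match(str(v))
                 for k, v in tags.items()):
        problems.append("illegal characters in a tag key or value")
    return problems

good = {"metric": "sys.cpu.user", "timestamp": 1529042571, "value": 42.5,
        "tags": {"host": "node1"}}
bad = dict(good, tags={})  # empty tag set: a typical cause of the 400 above

print(check_datapoint(good))  # -> []
print(check_datapoint(bad))   # -> ['at least one tag is required']
print(json.dumps([good]))     # the JSON array you would POST to /api/put
```

Running records through a check like this before the HTTP Client stage posts them makes the TSD's otherwise vague batch-level 400 much easier to pin to a specific field.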
{"paragraphs":[{"text":"%md # Forest Fire Prediction through KMeans Clustering\n<img src=\"https://surveymonkey-assets.s3.amazonaws.com/survey/121135814/6a48257c-8996-4aa6-ba56-6b1e373385c3.png\" width=100 hspace=\"20\" style=\"float: right;\">\nThe United States Forest Service provides datasets that describe forest fires that have occurred in Canada and the United States since year 2000. We can predict where forest fires are prone to occur by partitioning the locations of past burns into clusters whose centroids can be used to optimally place heavy fire fighting equipment as near as possible to where fires are likely to occur.\n\nDataset:\nhttps://fsapps.nwcg.gov/gisdata.php\n","user":"anonymous","dateUpdated":"2017-10-24T17:06:43+0000","config":{"colWidth":12,"enabled":true,"results":{},"editorSetting":{"language":"markdown","editOnDblClick":true},"editorMode":"ace/mode/markdown","editorHide":true,"tableHide":false},"settings":{"params":{},"forms":{}},"results":{"code":"SUCCESS","msg":[{"type":"HTML","data |
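The notebook paragraph above describes clustering historical fire locations so the cluster centroids can guide equipment placement. A minimal, self-contained sketch of the underlying technique (k-means via Lloyd's algorithm) on (latitude, longitude) pairs follows; the data here is synthetic, not the USFS dataset, and the deterministic initialization is an illustrative simplification of the usual random seeding.

```python
import math

def kmeans(points, centroids, iters=20):
    """Plain Lloyd's algorithm: returns (centroids, assignments)."""
    centroids = list(centroids)
    k = len(centroids)
    assignments = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        assignments = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                       for p in points]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, a in zip(points, assignments) if a == c]
            if members:
                centroids[c] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    return centroids, assignments

# Synthetic "fire locations": two obvious (lat, lon) clusters.
fires = [(48.1, -119.8), (48.2, -119.9), (48.0, -120.0),   # around 48N, 120W
         (33.1, -110.0), (33.0, -109.9), (33.2, -109.8)]   # around 33N, 110W

# Seed one centroid in each region (deterministic for the sake of the example).
centroids, labels = kmeans(fires, [fires[0], fires[3]])
print(labels)  # each synthetic cluster ends up with a single shared label
```

The real notebook would feed the full fire-occurrence dataset through Spark's KMeans rather than this toy loop, but the assignment/update iteration is the same idea.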
nvidia@tegra-ubuntu:~/nd4j$ mvn clean install
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Build Order:
[INFO]
[INFO] nd4j
[INFO] nd4j-shade
[INFO] jackson
[INFO] nd4j-common
[INFO] nd4j-context