@nyck33
Created September 20, 2023 12:43
Morpheus: the simple Python example and the abp_nvsmi pipeline fail with the same problem (cuDF JSON parser error)
```
(base) nyck33@nyck33-ubuntu2304:~/Documents/morpheus$ docker exec -it morpheus_container /bin/bash
(morpheus) root@72c4e3958688:/workspace# ls
CHANGELOG.md CONTRIBUTING.md LICENSE README.md docker docs examples models morpheus scripts
(morpheus) root@72c4e3958688:/workspace# cd morpheus
(morpheus) root@72c4e3958688:/workspace/morpheus# export MORPHEUS_ROOT=$(pwd)
# Launch Morpheus printing debug messages
morpheus --log_level=DEBUG \
`# Run a pipeline with 8 threads and a model batch size of 1024 (must be equal to or less than the Triton config)` \
run --num_threads=8 --pipeline_batch_size=1024 --model_max_batch_size=1024 \
`# Specify a FIL pipeline using the nvsmi feature columns file` \
pipeline-fil --columns_file=${MORPHEUS_ROOT}/morpheus/data/columns_fil.txt \
`# 1st Stage: Read from file` \
from-file --filename=examples/data/nvsmi.jsonlines \
`# 2nd Stage: Deserialize from JSON strings to objects` \
deserialize \
`# 3rd Stage: Preprocessing converts the input data into FIL feature tensors` \
preprocess \
`# 4th Stage: Send messages to Triton for inference. Specify the model loaded in Setup` \
inf-triton --model_name=abp-nvsmi-xgb --server_url=localhost:8000 \
`# 5th Stage: Monitor stage prints throughput information to the console` \
monitor --description "Inference Rate" --smoothing=0.001 --unit inf \
`# 6th Stage: Add results from inference to the messages` \
add-class \
`# 7th Stage: Convert from objects back into strings. Ignore verbose input data` \
serialize --include 'mining' \
`# 8th Stage: Write out the JSON lines to the detections.jsonlines file` \
to-file --filename=detections.jsonlines --overwrite
Configuring Pipeline via CLI
Loaded columns. Current columns: [['nvidia_smi_log.gpu.fb_memory_usage.used', 'nvidia_smi_log.gpu.fb_memory_usage.free', 'nvidia_smi_log.gpu.utilization.gpu_util', 'nvidia_smi_log.gpu.utilization.memory_util', 'nvidia_smi_log.gpu.temperature.gpu_temp', 'nvidia_smi_log.gpu.temperature.gpu_temp_max_threshold', 'nvidia_smi_log.gpu.temperature.gpu_temp_slow_threshold', 'nvidia_smi_log.gpu.power_readings.power_draw', 'nvidia_smi_log.gpu.clocks.graphics_clock', 'nvidia_smi_log.gpu.clocks.sm_clock', 'nvidia_smi_log.gpu.clocks.mem_clock', 'nvidia_smi_log.gpu.applications_clocks.graphics_clock', 'nvidia_smi_log.gpu.applications_clocks.mem_clock', 'nvidia_smi_log.gpu.default_applications_clocks.graphics_clock', 'nvidia_smi_log.gpu.default_applications_clocks.mem_clock', 'nvidia_smi_log.gpu.max_clocks.graphics_clock', 'nvidia_smi_log.gpu.max_clocks.sm_clock', 'nvidia_smi_log.gpu.max_clocks.mem_clock']]
Starting pipeline via CLI... Ctrl+C to Quit
Config:
{
"ae": null,
"class_labels": [
"mining"
],
"debug": false,
"edge_buffer_size": 128,
"feature_length": 18,
"fil": {
"feature_columns": [
"nvidia_smi_log.gpu.fb_memory_usage.used",
"nvidia_smi_log.gpu.fb_memory_usage.free",
"nvidia_smi_log.gpu.utilization.gpu_util",
"nvidia_smi_log.gpu.utilization.memory_util",
"nvidia_smi_log.gpu.temperature.gpu_temp",
"nvidia_smi_log.gpu.temperature.gpu_temp_max_threshold",
"nvidia_smi_log.gpu.temperature.gpu_temp_slow_threshold",
"nvidia_smi_log.gpu.power_readings.power_draw",
"nvidia_smi_log.gpu.clocks.graphics_clock",
"nvidia_smi_log.gpu.clocks.sm_clock",
"nvidia_smi_log.gpu.clocks.mem_clock",
"nvidia_smi_log.gpu.applications_clocks.graphics_clock",
"nvidia_smi_log.gpu.applications_clocks.mem_clock",
"nvidia_smi_log.gpu.default_applications_clocks.graphics_clock",
"nvidia_smi_log.gpu.default_applications_clocks.mem_clock",
"nvidia_smi_log.gpu.max_clocks.graphics_clock",
"nvidia_smi_log.gpu.max_clocks.sm_clock",
"nvidia_smi_log.gpu.max_clocks.mem_clock"
]
},
"log_config_file": null,
"log_level": 10,
"mode": "FIL",
"model_max_batch_size": 1024,
"num_threads": 8,
"pipeline_batch_size": 1024,
"plugins": []
}
CPP Enabled: True
====Registering Pipeline====
W20230920 12:30:53.805799 71 thread.cpp:137] unable to set memory policy - if using docker use: --cap-add=sys_nice to allow membind
====Building Pipeline====
Inference Rate: 0 inf [00:00, ? inf/s]====Building Pipeline Complete!====
Starting! Time: 1695213053.8081644
====Registering Pipeline Complete!====
====Starting Pipeline====
E20230920 12:30:53.819872 71 builder_definition.cpp:283] Exception during segment initializer. Segment name: linear_segment_0, Segment Rank: 0. Exception message:
RuntimeError: Unable to connect to Triton at 'localhost:8000'. Check the URL and port and ensure the server is running.
At:
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/stages/inference/triton_inference_stage.py(952): _get_cpp_inference_node
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/stages/inference/inference_stage.py(261): _build_single
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/single_port_stage.py(84): _build
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py(325): build
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py(347): build
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py(347): build
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py(347): build
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py(258): inner_build
====Pipeline Started====
E20230920 12:30:53.820327 71 controller.cpp:64] exception caught while performing update - this is fatal - issuing kill
====Building Segment: linear_segment_0====
Added source: <from-file-0; FileSourceStage(filename=examples/data/nvsmi.jsonlines, iterative=False, file_type=FileTypes.Auto, repeat=1, filter_null=True, parser_kwargs=None)>
└─> morpheus.MessageMeta
Added stage: <deserialize-1; DeserializeStage(ensure_sliceable_index=True)>
└─ morpheus.MessageMeta -> morpheus.MultiMessage
E20230920 12:30:53.820606 71 context.cpp:124] rank: 0; size: 1; tid: 139992821134912; fid: 0x7f529804b300: set_exception issued; issuing kill to current runnable. Exception msg: RuntimeError: Unable to connect to Triton at 'localhost:8000'. Check the URL and port and ensure the server is running.
At:
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/stages/inference/triton_inference_stage.py(952): _get_cpp_inference_node
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/stages/inference/inference_stage.py(261): _build_single
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/single_port_stage.py(84): _build
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py(325): build
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py(347): build
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py(347): build
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py(347): build
/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py(258): inner_build
E20230920 12:30:53.820641 71 manager.cpp:89] error detected on controller
E20230920 12:30:53.820709 56 runner.cpp:189] Runner::await_join - an exception was caught while awaiting on one or more contexts/instances - rethrowing
Added stage: <preprocess-fil-2; PreprocessFILStage()>
└─ morpheus.MultiMessage -> morpheus.MultiInferenceFILMessage
Inference Rate: 0 inf [00:00, ? inf/s]
E20230920 12:30:53.822510 56 service.cpp:136] mrc::service: service was not joined before being destructed; issuing join
Exception occurred in pipeline. Rethrowing
Traceback (most recent call last):
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 329, in join
await self._mrc_executor.join_async()
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 258, in inner_build
stage.build(builder)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py", line 347, in build
dep.build(builder, do_propagate=do_propagate)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py", line 347, in build
dep.build(builder, do_propagate=do_propagate)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py", line 347, in build
dep.build(builder, do_propagate=do_propagate)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py", line 325, in build
out_ports_pair = self._build(builder=builder, in_ports_streams=in_ports_pairs)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/single_port_stage.py", line 84, in _build
return [self._build_single(builder, in_ports_streams[0])]
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/stages/inference/inference_stage.py", line 261, in _build_single
node = self._get_cpp_inference_node(builder)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/stages/inference/triton_inference_stage.py", line 952, in _get_cpp_inference_node
return _stages.InferenceClientStage(builder,
RuntimeError: Unable to connect to Triton at 'localhost:8000'. Check the URL and port and ensure the server is running.
Traceback (most recent call last):
File "/opt/conda/envs/morpheus/bin/morpheus", line 11, in <module>
====Pipeline Complete====
sys.exit(run_cli())
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/cli/run.py", line 20, in run_cli
cli(obj={}, auto_envvar_prefix='MORPHEUS', show_default=True, prog_name="morpheus")
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1720, in invoke
return _process_result(rv)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1657, in _process_result
value = ctx.invoke(self._result_callback, value, **ctx.params)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/decorators.py", line 33, in new_func
return f(get_current_context(), *args, **kwargs)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/cli/commands.py", line 626, in post_pipeline
pipeline.run()
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 606, in run
asyncio.run(self.run_async())
File "/opt/conda/envs/morpheus/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/opt/conda/envs/morpheus/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 584, in run_async
await self.join()
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 329, in join
await self._mrc_executor.join_async()
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 258, in inner_build
stage.build(builder)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py", line 347, in build
dep.build(builder, do_propagate=do_propagate)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py", line 347, in build
dep.build(builder, do_propagate=do_propagate)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py", line 347, in build
dep.build(builder, do_propagate=do_propagate)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/stream_wrapper.py", line 325, in build
out_ports_pair = self._build(builder=builder, in_ports_streams=in_ports_pairs)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/single_port_stage.py", line 84, in _build
return [self._build_single(builder, in_ports_streams[0])]
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/stages/inference/inference_stage.py", line 261, in _build_single
node = self._get_cpp_inference_node(builder)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/stages/inference/triton_inference_stage.py", line 952, in _get_cpp_inference_node
return _stages.InferenceClientStage(builder,
RuntimeError: Unable to connect to Triton at 'localhost:8000'. Check the URL and port and ensure the server is running.
(morpheus) root@72c4e3958688:/workspace/morpheus# export MORPHEUS_ROOT=$(pwd)
morpheus --log_level=DEBUG \
run --num_threads=8 --pipeline_batch_size=1024 --model_max_batch_size=1024 \
pipeline-fil --columns_file=${MORPHEUS_ROOT}/morpheus/data/columns_fil.txt \
from-file --filename=examples/data/nvsmi.jsonlines \
deserialize \
preprocess \
inf-triton --model_name=abp-nvsmi-xgb --server_url=192.168.2.101:8000 \
monitor --description "Inference Rate" --smoothing=0.001 --unit inf \
add-class \
serialize --include 'mining' \
to-file --filename=detections.jsonlines --overwrite
Configuring Pipeline via CLI
Loaded columns. Current columns: [['nvidia_smi_log.gpu.fb_memory_usage.used', 'nvidia_smi_log.gpu.fb_memory_usage.free', 'nvidia_smi_log.gpu.utilization.gpu_util', 'nvidia_smi_log.gpu.utilization.memory_util', 'nvidia_smi_log.gpu.temperature.gpu_temp', 'nvidia_smi_log.gpu.temperature.gpu_temp_max_threshold', 'nvidia_smi_log.gpu.temperature.gpu_temp_slow_threshold', 'nvidia_smi_log.gpu.power_readings.power_draw', 'nvidia_smi_log.gpu.clocks.graphics_clock', 'nvidia_smi_log.gpu.clocks.sm_clock', 'nvidia_smi_log.gpu.clocks.mem_clock', 'nvidia_smi_log.gpu.applications_clocks.graphics_clock', 'nvidia_smi_log.gpu.applications_clocks.mem_clock', 'nvidia_smi_log.gpu.default_applications_clocks.graphics_clock', 'nvidia_smi_log.gpu.default_applications_clocks.mem_clock', 'nvidia_smi_log.gpu.max_clocks.graphics_clock', 'nvidia_smi_log.gpu.max_clocks.sm_clock', 'nvidia_smi_log.gpu.max_clocks.mem_clock']]
Starting pipeline via CLI... Ctrl+C to Quit
Config:
{
"ae": null,
"class_labels": [
"mining"
],
"debug": false,
"edge_buffer_size": 128,
"feature_length": 18,
"fil": {
"feature_columns": [
"nvidia_smi_log.gpu.fb_memory_usage.used",
"nvidia_smi_log.gpu.fb_memory_usage.free",
"nvidia_smi_log.gpu.utilization.gpu_util",
"nvidia_smi_log.gpu.utilization.memory_util",
"nvidia_smi_log.gpu.temperature.gpu_temp",
"nvidia_smi_log.gpu.temperature.gpu_temp_max_threshold",
"nvidia_smi_log.gpu.temperature.gpu_temp_slow_threshold",
"nvidia_smi_log.gpu.power_readings.power_draw",
"nvidia_smi_log.gpu.clocks.graphics_clock",
"nvidia_smi_log.gpu.clocks.sm_clock",
"nvidia_smi_log.gpu.clocks.mem_clock",
"nvidia_smi_log.gpu.applications_clocks.graphics_clock",
"nvidia_smi_log.gpu.applications_clocks.mem_clock",
"nvidia_smi_log.gpu.default_applications_clocks.graphics_clock",
"nvidia_smi_log.gpu.default_applications_clocks.mem_clock",
"nvidia_smi_log.gpu.max_clocks.graphics_clock",
"nvidia_smi_log.gpu.max_clocks.sm_clock",
"nvidia_smi_log.gpu.max_clocks.mem_clock"
]
},
"log_config_file": null,
"log_level": 10,
"mode": "FIL",
"model_max_batch_size": 1024,
"num_threads": 8,
"pipeline_batch_size": 1024,
"plugins": []
}
W20230920 12:31:16.390398 97 thread.cpp:137] unable to set memory policy - if using docker use: --cap-add=sys_nice to allow membind
CPP Enabled: True
====Registering Pipeline====
====Building Pipeline====
Inference Rate: 0 inf [00:00, ? inf/s]====Building Pipeline Complete!====
Starting! Time: 1695213076.3922594
====Registering Pipeline Complete!====
====Starting Pipeline====
====Building Segment: linear_segment_0====
Added source: <from-file-0; FileSourceStage(filename=examples/data/nvsmi.jsonlines, iterative=False, file_type=FileTypes.Auto, repeat=1, filter_null=True, parser_kwargs=None)>
└─> morpheus.MessageMeta
Added stage: <deserialize-1; DeserializeStage(ensure_sliceable_index=True)>
└─ morpheus.MessageMeta -> morpheus.MultiMessage
Added stage: <preprocess-fil-2; PreprocessFILStage()>
└─ morpheus.MultiMessage -> morpheus.MultiInferenceFILMessage
Added stage: <inference-3; TritonInferenceStage(model_name=abp-nvsmi-xgb, server_url=192.168.2.101:8000, force_convert_inputs=False, use_shared_memory=False)>
└─ morpheus.MultiInferenceFILMessage -> morpheus.MultiResponseMessage
Added stage: <monitor-4; MonitorStage(description=Inference Rate, smoothing=0.001, unit=inf, delayed_start=False, determine_count_fn=None, log_level=LogLevels.INFO)>
└─ morpheus.MultiResponseMessage -> morpheus.MultiResponseMessage
====Pipeline Started====
Added stage: <add-class-5; AddClassificationsStage(labels=(), prefix=, probs_type=TypeId.BOOL8, threshold=0.5)>
└─ morpheus.MultiResponseMessage -> morpheus.MultiResponseMessage
Added stage: <serialize-6; SerializeStage(include=('mining',), exclude=(), fixed_columns=True)>
└─ morpheus.MultiResponseMessage -> morpheus.MessageMeta
Added stage: <to-file-7; WriteToFileStage(filename=detections.jsonlines, overwrite=True, file_type=FileTypes.Auto, include_index_col=True, flush=False)>
└─ morpheus.MessageMeta -> morpheus.MessageMeta
====Building Segment Complete!====
E20230920 12:31:16.792425 108 context.cpp:124] /linear_segment_0/from-file-0; rank: 0; size: 1; tid: 140626456720960: set_exception issued; issuing kill to current runnable. Exception msg: CUDF failure at:/opt/conda/conda-bld/work/cpp/src/io/json/json_tree.cu:264: JSON Parser encountered an invalid format at location 8
Inference Rate[Complete]: 0 inf [00:00, ? inf/s]E20230920 12:31:16.793745 82 runner.cpp:189] Runner::await_join - an exception was caught while awaiting on one or more contexts/instances - rethrowing
E20230920 12:31:16.793892 82 segment_instance.cpp:270] segment::SegmentInstance - an exception was caught while awaiting on one or more nodes - rethrowing
E20230920 12:31:16.793922 82 pipeline_instance.cpp:225] pipeline::PipelineInstance - an exception was caught while awaiting on segments - rethrowing
Inference Rate[Complete]: 0 inf [00:00, ? inf/s]
E20230920 12:31:16.794489 82 service.cpp:136] mrc::service: service was not joined before being destructed; issuing join
Exception occurred in pipeline. Rethrowing
Traceback (most recent call last):
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 329, in join
await self._mrc_executor.join_async()
RuntimeError: CUDF failure at:/opt/conda/conda-bld/work/cpp/src/io/json/json_tree.cu:264: JSON Parser encountered an invalid format at location 8
====Pipeline Complete====
Traceback (most recent call last):
File "/opt/conda/envs/morpheus/bin/morpheus", line 11, in <module>
sys.exit(run_cli())
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/cli/run.py", line 20, in run_cli
cli(obj={}, auto_envvar_prefix='MORPHEUS', show_default=True, prog_name="morpheus")
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1720, in invoke
return _process_result(rv)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 1657, in _process_result
value = ctx.invoke(self._result_callback, value, **ctx.params)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/click/decorators.py", line 33, in new_func
return f(get_current_context(), *args, **kwargs)
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/cli/commands.py", line 626, in post_pipeline
pipeline.run()
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 606, in run
asyncio.run(self.run_async())
File "/opt/conda/envs/morpheus/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/opt/conda/envs/morpheus/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 584, in run_async
await self.join()
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 329, in join
await self._mrc_executor.join_async()
RuntimeError: CUDF failure at:/opt/conda/conda-bld/work/cpp/src/io/json/json_tree.cu:264: JSON Parser encountered an invalid format at location 8
(morpheus) root@72c4e3958688:/workspace/morpheus# nvidia-smi
Wed Sep 20 12:31:32 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.86.05 Driver Version: 535.86.05 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce GTX 1650 Off | 00000000:01:00.0 Off | N/A |
| N/A 42C P8 2W / 50W | 123MiB / 4096MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
(morpheus) root@72c4e3958688:/workspace/morpheus# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
(morpheus) root@72c4e3958688:/workspace/morpheus# ls
CHANGELOG.md README.md detections.jsonlines models setup.cfg
CMakeLists.txt ci docker morpheus setup.py
CONTRIBUTING.md cmake docs morpheus.code-workspace tests
LICENSE cufile.log examples pyproject.toml thirdparty
MANIFEST.in dependencies.yaml external scripts versioneer.py
(morpheus) root@72c4e3958688:/workspace/morpheus# cd examples
(morpheus) root@72c4e3958688:/workspace/morpheus/examples# ls
CMakeLists.txt data gnn_fraud_detection_pipeline root_cause_analysis
README.md developer_guide log_parsing sid_visualization
abp_nvsmi_detection digital_fingerprinting nlp_si_detection
abp_pcap_detection doca ransomware_detection
(morpheus) root@72c4e3958688:/workspace/morpheus/examples# cd developer_guide
(morpheus) root@72c4e3958688:/workspace/morpheus/examples/developer_guide# ls
1_simple_python_stage 2_1_real_world_phishing 2_2_rabbitmq 3_simple_cpp_stage 4_rabbitmq_cpp_stage
(morpheus) root@72c4e3958688:/workspace/morpheus/examples/developer_guide# cd 1_simple_python_stage
(morpheus) root@72c4e3958688:/workspace/morpheus/examples/developer_guide/1_simple_python_stage# ls
pass_thru.py run.py
(morpheus) root@72c4e3958688:/workspace/morpheus/examples/developer_guide/1_simple_python_stage# python run.py
====Registering Pipeline====
W20230920 12:32:36.498116 144 thread.cpp:137] unable to set memory policy - if using docker use: --cap-add=sys_nice to allow membind
====Building Pipeline====
Progress: 0 messages [00:00, ? messages/s]====Building Pipeline Complete!====
Starting! Time: 1695213156.4988692
====Registering Pipeline Complete!====
====Starting Pipeline====
====Building Segment: linear_segment_0====
Added source: <from-file-0; FileSourceStage(filename=/workspace/morpheus/examples/data/email_with_addresses.jsonlines, iterative=False, file_type=FileTypes.Auto, repeat=1, filter_null=True, parser_kwargs=None)>
└─> morpheus.MessageMeta
Added stage: <pass-thru-1; PassThruStage()>
└─ morpheus.MessageMeta -> morpheus.MessageMeta
Added stage: <monitor-2; MonitorStage(description=Progress, smoothing=0.05, unit=messages, delayed_start=False, determine_count_fn=None, log_level=LogLevels.INFO)>
└─ morpheus.MessageMeta -> morpheus.MessageMeta
====Building Segment Complete!====
====Pipeline Started====
E20230920 12:32:36.870592 146 context.cpp:124] /linear_segment_0/from-file-0; rank: 0; size: 1; tid: 139987628578368: set_exception issued; issuing kill to current runnable. Exception msg: CUDF failure at:/opt/conda/conda-bld/work/cpp/src/io/json/json_tree.cu:264: JSON Parser encountered an invalid format at location 8
E20230920 12:32:36.883885 129 runner.cpp:189] Runner::await_join - an exception was caught while awaiting on one or more contexts/instances - rethrowing
Progress[Complete]: 0 messages [00:00, ? messages/s]E20230920 12:32:36.884295 129 segment_instance.cpp:270] segment::SegmentInstance - an exception was caught while awaiting on one or more nodes - rethrowing
E20230920 12:32:36.884330 129 pipeline_instance.cpp:225] pipeline::PipelineInstance - an exception was caught while awaiting on segments - rethrowing
Progress[Complete]: 0 messages [00:00, ? messages/s]
E20230920 12:32:36.885149 129 service.cpp:136] mrc::service: service was not joined before being destructed; issuing join
Exception occurred in pipeline. Rethrowing
Traceback (most recent call last):
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 329, in join
await self._mrc_executor.join_async()
RuntimeError: CUDF failure at:/opt/conda/conda-bld/work/cpp/src/io/json/json_tree.cu:264: JSON Parser encountered an invalid format at location 8
====Pipeline Complete====
Traceback (most recent call last):
File "/workspace/morpheus/examples/developer_guide/1_simple_python_stage/run.py", line 55, in <module>
run_pipeline()
File "/workspace/morpheus/examples/developer_guide/1_simple_python_stage/run.py", line 51, in run_pipeline
pipeline.run()
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 606, in run
asyncio.run(self.run_async())
File "/opt/conda/envs/morpheus/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/opt/conda/envs/morpheus/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 584, in run_async
await self.join()
File "/opt/conda/envs/morpheus/lib/python3.10/site-packages/morpheus/pipeline/pipeline.py", line 329, in join
await self._mrc_executor.join_async()
RuntimeError: CUDF failure at:/opt/conda/conda-bld/work/cpp/src/io/json/json_tree.cu:264: JSON Parser encountered an invalid format at location 8
(morpheus) root@72c4e3958688:/workspace/morpheus/examples/developer_guide/1_simple_python_stage#
```