(env) [jmatthews] in ~/git/jwmatthews/kai (response_metadata) $ make run-server
if [[ "$(uname)" -eq "Darwin" ]] ; then export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES ; fi ;\
PYTHONPATH="/Users/jmatthews/git/jwmatthews/kai/kai:" python kai/server.py
[2024-09-21 09:33:19 -0400] [18875] [INFO] Starting gunicorn 22.0.0
[2024-09-21 09:33:19 -0400] [18875] [INFO] Listening at: http://0.0.0.0:8080 (18875)
[2024-09-21 09:33:19 -0400] [18875] [INFO] Using worker: aiohttp.GunicornWebWorker
[2024-09-21 09:33:19 -0400] [18878] [INFO] Booting worker with pid: 18878
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 09:33:19,857 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 09:33:19,860 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 09:33:19,860 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
[2024-09-21 09:33:19 -0400] [18879] [INFO] Booting worker with pid: 18879
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 09:33:19,910 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 09:33:19,912 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 09:33:19,913 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
INFO - 2024-09-21 09:33:19,977 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 09:33:19,978 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
[2024-09-21 09:33:19 -0400] [18880] [INFO] Booting worker with pid: 18880
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 09:33:19,998 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 09:33:20,001 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 09:33:20,001 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
INFO - 2024-09-21 09:33:20,024 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 09:33:20,025 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
[2024-09-21 09:33:20 -0400] [18881] [INFO] Booting worker with pid: 18881
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 09:33:20,083 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 09:33:20,086 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 09:33:20,087 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
INFO - 2024-09-21 09:33:20,112 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 09:33:20,112 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
[2024-09-21 09:33:20 -0400] [18882] [INFO] Booting worker with pid: 18882
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 09:33:20,128 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 09:33:20,131 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 09:33:20,132 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
[2024-09-21 09:33:20 -0400] [18883] [INFO] Booting worker with pid: 18883
INFO - 2024-09-21 09:33:20,211 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 09:33:20,212 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 09:33:20,218 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 09:33:20,222 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 09:33:20,223 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
[2024-09-21 09:33:20 -0400] [18884] [INFO] Booting worker with pid: 18884
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 09:33:20,244 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 09:33:20,247 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 09:33:20,247 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
INFO - 2024-09-21 09:33:20,254 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 09:33:20,255 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
[2024-09-21 09:33:20 -0400] [18885] [INFO] Booting worker with pid: 18885
INFO - 2024-09-21 09:33:20,341 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 09:33:20,341 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 09:33:20,343 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 09:33:20,345 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 09:33:20,345 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
INFO - 2024-09-21 09:33:20,356 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 09:33:20,357 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
INFO - 2024-09-21 09:33:20,449 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 09:33:20,450 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
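
(Editor's note: all 8 gunicorn workers boot and each logs the same `Config loaded:` repr, which is why that line repeats above. For reference, the repr implies roughly the following config shape; this is a minimal sketch using the field names visible in the repr, not kai's actual config classes, which live in the repo and may differ.)

```python
# Minimal sketch of the config shape implied by the "Config loaded:" repr
# above. Field names and values are copied from the repr; the real
# KaiConfig classes in the kai repo may be structured differently.
from dataclasses import dataclass, field


@dataclass
class ModelsConfig:
    provider: str = "ChatIBMGenAI"
    args: dict = field(default_factory=lambda: {
        "model_id": "meta-llama/llama-3-70b-instruct",
        "parameters": {"max_new_tokens": 2048},
    })
    llm_retries: int = 5          # failed LLM calls are retried up to 5 times
    llm_retry_delay: float = 10.0


@dataclass
class Config:
    gunicorn_workers: int = 8     # matches the 8 "Booting worker" lines above
    gunicorn_bind: str = "0.0.0.0:8080"
    gunicorn_timeout: int = 3600
    trace_enabled: bool = True
    models: ModelsConfig = field(default_factory=ModelsConfig)
```
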
INFO - 2024-09-21 09:33:57,058 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/webapp/WEB-INF/web.xml' with 1 incidents'
INFO - 2024-09-21 09:33:57,059 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/model/Order.java' with 10 incidents'
INFO - 2024-09-21 09:33:57,060 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/webapp/WEB-INF/beans.xml' with 5 incidents'
INFO - 2024-09-21 09:33:57,060 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/model/OrderItem.java' with 6 incidents'
INFO - 2024-09-21 09:33:57,060 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'pom.xml' with 12 incidents'
INFO - 2024-09-21 09:33:57,172 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 1 incident(s) for src/main/webapp/WEB-INF/web.xml
INFO - 2024-09-21 09:33:57,172 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 5 incident(s) for src/main/webapp/WEB-INF/beans.xml
INFO - 2024-09-21 09:33:57,173 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 12 incident(s) for pom.xml
INFO - 2024-09-21 09:33:57,173 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 6 incident(s) for src/main/java/com/redhat/coolstore/model/OrderItem.java
INFO - 2024-09-21 09:33:57,173 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 10 incident(s) for src/main/java/com/redhat/coolstore/model/Order.java
INFO - 2024-09-21 09:34:18,061 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:73 - post_get_incident_solutions_for_file()] - END - completed in '21.00247287750244s: - App: 'coolstore', File: 'src/main/webapp/WEB-INF/web.xml' with 1 incidents'
INFO - 2024-09-21 09:34:18,066 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/persistence/Resources.java' with 6 incidents'
INFO - 2024-09-21 09:34:18,145 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 6 incident(s) for src/main/java/com/redhat/coolstore/persistence/Resources.java
INFO - 2024-09-21 09:44:04,440 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:73 - post_get_incident_solutions_for_file()] - END - completed in '607.3796589374542s: - App: 'coolstore', File: 'pom.xml' with 12 incidents'
[2024-09-21 09:44:04 -0400] [18880] [ERROR] Error handling request
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 132, in handle_request
response.raise_for_status()
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_models.py", line 759, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_protocol.py", line 452, in _handle_request
resp = await request_handler(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_app.py", line 543, in _handle
resp = await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 69, in post_get_incident_solutions_for_file
raise e
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 57, in post_get_incident_solutions_for_file
result: UpdatedFileContent = kai_application.get_incident_solutions_for_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/kai_application/kai_application.py", line 145, in get_incident_solutions_for_file
solutions = self.incident_store.find_solutions(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/incident_store/incident_store.py", line 430, in find_solutions
processed_solution = self.solution_producer.post_process_one(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/solution_handling/production.py", line 167, in post_process_one
llm_result = self.model_provider.llm.invoke(rendered_template)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 277, in invoke
self.generate_prompt(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 777, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 634, in generate
raise e
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 624, in generate
self._generate_with_cache(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 846, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 237, in _generate
result = handle_stream() if self.streaming else handle_non_stream()
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 202, in handle_stream
for result in self._stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 168, in _stream
for response in self.client.text.chat.create_stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/chat/chat_generation_service.py", line 179, in create_stream
yield from generation_stream_handler(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/generation/_generation_utils.py", line 13, in generation_stream_handler
for response in generator:
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/httpx_client.py", line 30, in post_stream
with connect_sse(
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx_sse/_api.py", line 54, in connect_sse
with client.stream(method, url, headers=headers, **kwargs) as response:
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 871, in stream
response = self.send(
^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 915, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 148, in handle_request
raise self._create_exception(
genai.exceptions.ApiResponseException: Failed to handle request to https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10.
{
"error": "Bad Request",
"extensions": {
"code": "INVALID_INPUT",
"state": null
},
"message": "The user message is too long. Send a short message or decrease property min_new_tokens",
"status_code": 400
}
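
(Editor's note: the 400 above means the BAM API rejected the request because the rendered prompt, plus the requested new tokens, did not fit the model's context window; the same request is then retried and fails the same way in the blocks that follow. One way to avoid burning retries on a deterministic 400 is a pre-flight length check before `llm.invoke()`. The sketch below is a rough illustration, not kai's actual code: the 4-chars-per-token ratio and the 8k token budget are assumptions, and a real check would use the model's tokenizer and documented context size.)

```python
# Hedged sketch: estimate prompt size before invoking the model and fail
# fast (or split the incident batch) instead of sending an oversized
# prompt that the API will reject with INVALID_INPUT.
CHARS_PER_TOKEN = 4      # crude average for English text and code; an assumption
MAX_INPUT_TOKENS = 8192  # assumed input budget for llama-3-70b-instruct


def estimated_tokens(text: str) -> int:
    """Crude token estimate; a real implementation would use the tokenizer."""
    return max(1, len(text) // CHARS_PER_TOKEN)


def invoke_with_length_guard(llm, prompt: str):
    """Call llm.invoke(prompt) only if the prompt plausibly fits the window."""
    n = estimated_tokens(prompt)
    if n > MAX_INPUT_TOKENS:
        raise ValueError(
            f"Prompt is ~{n} tokens, over the ~{MAX_INPUT_TOKENS} budget; "
            "batch fewer incidents or shorten the rendered template."
        )
    return llm.invoke(prompt)
```
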
INFO - 2024-09-21 09:44:04,445 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/model/InventoryEntity.java' with 6 incidents'
INFO - 2024-09-21 09:44:04,449 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'pom.xml' with 12 incidents'
INFO - 2024-09-21 09:44:04,459 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 6 incident(s) for src/main/java/com/redhat/coolstore/model/InventoryEntity.java
INFO - 2024-09-21 09:44:04,566 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 12 incident(s) for pom.xml
INFO - 2024-09-21 09:44:04,945 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:73 - post_get_incident_solutions_for_file()] - END - completed in '0.4969329833984375s: - App: 'coolstore', File: 'pom.xml' with 12 incidents'
[2024-09-21 09:44:04 -0400] [18879] [ERROR] Error handling request
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 132, in handle_request
response.raise_for_status()
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_models.py", line 759, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_protocol.py", line 452, in _handle_request
resp = await request_handler(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_app.py", line 543, in _handle
resp = await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 69, in post_get_incident_solutions_for_file
raise e
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 57, in post_get_incident_solutions_for_file
result: UpdatedFileContent = kai_application.get_incident_solutions_for_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/kai_application/kai_application.py", line 145, in get_incident_solutions_for_file
solutions = self.incident_store.find_solutions(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/incident_store/incident_store.py", line 430, in find_solutions
processed_solution = self.solution_producer.post_process_one(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/solution_handling/production.py", line 167, in post_process_one
llm_result = self.model_provider.llm.invoke(rendered_template)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 277, in invoke
self.generate_prompt(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 777, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 634, in generate
raise e
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 624, in generate
self._generate_with_cache(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 846, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 237, in _generate
result = handle_stream() if self.streaming else handle_non_stream()
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 202, in handle_stream
for result in self._stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 168, in _stream
for response in self.client.text.chat.create_stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/chat/chat_generation_service.py", line 179, in create_stream
yield from generation_stream_handler(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/generation/_generation_utils.py", line 13, in generation_stream_handler
for response in generator:
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/httpx_client.py", line 30, in post_stream
with connect_sse(
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx_sse/_api.py", line 54, in connect_sse
with client.stream(method, url, headers=headers, **kwargs) as response:
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 871, in stream
response = self.send(
^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 915, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 148, in handle_request
raise self._create_exception(
genai.exceptions.ApiResponseException: Failed to handle request to https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10.
{
"error": "Bad Request",
"extensions": {
"code": "INVALID_INPUT",
"state": null
},
"message": "The user message is too long. Send a short message or decrease property min_new_tokens",
"status_code": 400
}
INFO - 2024-09-21 09:44:04,950 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'pom.xml' with 12 incidents'
INFO - 2024-09-21 09:44:04,958 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 12 incident(s) for pom.xml
INFO - 2024-09-21 09:44:05,274 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:73 - post_get_incident_solutions_for_file()] - END - completed in '0.32431578636169434s: - App: 'coolstore', File: 'pom.xml' with 12 incidents'
[2024-09-21 09:44:05 -0400] [18879] [ERROR] Error handling request
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 132, in handle_request
response.raise_for_status()
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_models.py", line 759, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_protocol.py", line 452, in _handle_request
resp = await request_handler(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_app.py", line 543, in _handle
resp = await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 69, in post_get_incident_solutions_for_file
raise e
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 57, in post_get_incident_solutions_for_file
result: UpdatedFileContent = kai_application.get_incident_solutions_for_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/kai_application/kai_application.py", line 145, in get_incident_solutions_for_file
solutions = self.incident_store.find_solutions(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/incident_store/incident_store.py", line 430, in find_solutions
processed_solution = self.solution_producer.post_process_one(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/solution_handling/production.py", line 167, in post_process_one
llm_result = self.model_provider.llm.invoke(rendered_template)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 277, in invoke
self.generate_prompt(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 777, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 634, in generate
raise e
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 624, in generate
self._generate_with_cache(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 846, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 237, in _generate
result = handle_stream() if self.streaming else handle_non_stream()
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 202, in handle_stream
for result in self._stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 168, in _stream
for response in self.client.text.chat.create_stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/chat/chat_generation_service.py", line 179, in create_stream
yield from generation_stream_handler(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/generation/_generation_utils.py", line 13, in generation_stream_handler
for response in generator:
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/httpx_client.py", line 30, in post_stream
with connect_sse(
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx_sse/_api.py", line 54, in connect_sse
with client.stream(method, url, headers=headers, **kwargs) as response:
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 871, in stream
response = self.send(
^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 915, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 148, in handle_request
raise self._create_exception(
genai.exceptions.ApiResponseException: Failed to handle request to https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10.
{
"error": "Bad Request",
"extensions": {
"code": "INVALID_INPUT",
"state": null
},
"message": "The user message is too long. Send a short message or decrease property min_new_tokens",
"status_code": 400
}
INFO - 2024-09-21 09:44:05,282 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'pom.xml' with 12 incidents'
INFO - 2024-09-21 09:44:05,295 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 12 incident(s) for pom.xml
INFO - 2024-09-21 09:44:05,611 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:73 - post_get_incident_solutions_for_file()] - END - completed in '0.3287220001220703s: - App: 'coolstore', File: 'pom.xml' with 12 incidents'
[2024-09-21 09:44:05 -0400] [18879] [ERROR] Error handling request
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 132, in handle_request
response.raise_for_status()
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_models.py", line 759, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_protocol.py", line 452, in _handle_request
resp = await request_handler(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_app.py", line 543, in _handle
resp = await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 69, in post_get_incident_solutions_for_file
raise e
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 57, in post_get_incident_solutions_for_file
result: UpdatedFileContent = kai_application.get_incident_solutions_for_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/kai_application/kai_application.py", line 145, in get_incident_solutions_for_file
solutions = self.incident_store.find_solutions(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/incident_store/incident_store.py", line 430, in find_solutions
processed_solution = self.solution_producer.post_process_one(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/solution_handling/production.py", line 167, in post_process_one
llm_result = self.model_provider.llm.invoke(rendered_template)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 277, in invoke
self.generate_prompt(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 777, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 634, in generate
raise e
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 624, in generate
self._generate_with_cache(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 846, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 237, in _generate
result = handle_stream() if self.streaming else handle_non_stream()
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 202, in handle_stream
for result in self._stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 168, in _stream
for response in self.client.text.chat.create_stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/chat/chat_generation_service.py", line 179, in create_stream
yield from generation_stream_handler(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/generation/_generation_utils.py", line 13, in generation_stream_handler
for response in generator:
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/httpx_client.py", line 30, in post_stream
with connect_sse(
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx_sse/_api.py", line 54, in connect_sse
with client.stream(method, url, headers=headers, **kwargs) as response:
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 871, in stream
response = self.send(
^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 915, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 148, in handle_request
raise self._create_exception(
genai.exceptions.ApiResponseException: Failed to handle request to https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10.
{
"error": "Bad Request",
"extensions": {
"code": "INVALID_INPUT",
"state": null
},
"message": "The user message is too long. Send a short message or decrease property min_new_tokens",
"status_code": 400
}
INFO - 2024-09-21 09:44:05,618 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'pom.xml' with 12 incidents'
INFO - 2024-09-21 09:44:05,629 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 12 incident(s) for pom.xml
INFO - 2024-09-21 09:44:05,952 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:73 - post_get_incident_solutions_for_file()] - END - completed in '0.333981990814209s: - App: 'coolstore', File: 'pom.xml' with 12 incidents'
[2024-09-21 09:44:05 -0400] [18879] [ERROR] Error handling request
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 132, in handle_request
response.raise_for_status()
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_models.py", line 759, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_protocol.py", line 452, in _handle_request
resp = await request_handler(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_app.py", line 543, in _handle
resp = await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 69, in post_get_incident_solutions_for_file
raise e
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 57, in post_get_incident_solutions_for_file
result: UpdatedFileContent = kai_application.get_incident_solutions_for_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/kai_application/kai_application.py", line 145, in get_incident_solutions_for_file
solutions = self.incident_store.find_solutions(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/incident_store/incident_store.py", line 430, in find_solutions
processed_solution = self.solution_producer.post_process_one(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/solution_handling/production.py", line 167, in post_process_one
llm_result = self.model_provider.llm.invoke(rendered_template)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 277, in invoke
self.generate_prompt(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 777, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 634, in generate
raise e
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 624, in generate
self._generate_with_cache(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 846, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 237, in _generate
result = handle_stream() if self.streaming else handle_non_stream()
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 202, in handle_stream
for result in self._stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 168, in _stream
for response in self.client.text.chat.create_stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/chat/chat_generation_service.py", line 179, in create_stream
yield from generation_stream_handler(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/generation/_generation_utils.py", line 13, in generation_stream_handler
for response in generator:
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/httpx_client.py", line 30, in post_stream
with connect_sse(
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx_sse/_api.py", line 54, in connect_sse
with client.stream(method, url, headers=headers, **kwargs) as response:
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 871, in stream
response = self.send(
^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 915, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 148, in handle_request
raise self._create_exception(
genai.exceptions.ApiResponseException: Failed to handle request to https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10.
{
"error": "Bad Request",
"extensions": {
"code": "INVALID_INPUT",
"state": null
},
"message": "The user message is too long. Send a short message or decrease property min_new_tokens",
"status_code": 400
}
INFO - 2024-09-21 09:44:05,960 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'pom.xml' with 12 incidents'
INFO - 2024-09-21 09:44:05,972 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 12 incident(s) for pom.xml
INFO - 2024-09-21 09:44:06,285 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:73 - post_get_incident_solutions_for_file()] - END - completed in '0.32546377182006836s: - App: 'coolstore', File: 'pom.xml' with 12 incidents'
[2024-09-21 09:44:06 -0400] [18879] [ERROR] Error handling request
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 132, in handle_request
response.raise_for_status()
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_models.py", line 759, in raise_for_status
raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_protocol.py", line 452, in _handle_request
resp = await request_handler(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/aiohttp/web_app.py", line 543, in _handle
resp = await handler(request)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 69, in post_get_incident_solutions_for_file
raise e
File "/Users/jmatthews/git/jwmatthews/kai/kai/routes/get_incident_solutions_for_file.py", line 57, in post_get_incident_solutions_for_file
result: UpdatedFileContent = kai_application.get_incident_solutions_for_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/kai_application/kai_application.py", line 145, in get_incident_solutions_for_file
solutions = self.incident_store.find_solutions(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/incident_store/incident_store.py", line 430, in find_solutions
processed_solution = self.solution_producer.post_process_one(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/kai/service/solution_handling/production.py", line 167, in post_process_one
llm_result = self.model_provider.llm.invoke(rendered_template)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 277, in invoke
self.generate_prompt(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 777, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 634, in generate
raise e
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 624, in generate
self._generate_with_cache(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 846, in _generate_with_cache
result = self._generate(
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 237, in _generate
result = handle_stream() if self.streaming else handle_non_stream()
^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 202, in handle_stream
for result in self._stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/extensions/langchain/chat_llm.py", line 168, in _stream
for response in self.client.text.chat.create_stream(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/chat/chat_generation_service.py", line 179, in create_stream
yield from generation_stream_handler(
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/text/generation/_generation_utils.py", line 13, in generation_stream_handler
for response in generator:
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/httpx_client.py", line 30, in post_stream
with connect_sse(
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx_sse/_api.py", line 54, in connect_sse
with client.stream(method, url, headers=headers, **kwargs) as response:
File "/opt/homebrew/Cellar/python@3.12/3.12.2_1/Frameworks/Python.framework/Versions/3.12/lib/python3.12/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 871, in stream
response = self.send(
^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 915, in send
response = self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 943, in _send_handling_auth
response = self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 980, in _send_handling_redirects
response = self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/httpx/_client.py", line 1016, in _send_single_request
response = transport.handle_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jmatthews/git/jwmatthews/kai/env/lib/python3.12/site-packages/genai/_utils/http_client/retry_transport.py", line 148, in handle_request
raise self._create_exception(
genai.exceptions.ApiResponseException: Failed to handle request to https://bam-api.res.ibm.com/v2/text/chat_stream?version=2024-01-10.
{
"error": "Bad Request",
"extensions": {
"code": "INVALID_INPUT",
"state": null
},
"message": "The user message is too long. Send a short message or decrease property min_new_tokens",
"status_code": 400
}
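
Both tracebacks above end in the same failure: the BAM endpoint returns HTTP 400 with code INVALID_INPUT because the rendered prompt for the file is larger than the model will accept ("The user message is too long"). A minimal client-side guard, sketched under the assumption that production.py keeps calling the LangChain-style llm.invoke(rendered_template) shown in the stack; MAX_PROMPT_CHARS and safe_invoke are hypothetical names, and a real fix would re-batch the incidents or trim the solution history rather than blindly truncate:

    MAX_PROMPT_CHARS = 24_000  # assumed budget; tune to the model's actual context window

    def safe_invoke(llm, rendered_template: str):
        # Hypothetical guard, not Kai code: BAM rejects over-long chat input
        # with 400 INVALID_INPUT instead of truncating server-side, so cap
        # the prompt before the request ever leaves the client.
        if len(rendered_template) > MAX_PROMPT_CHARS:
            # Naive head-keep truncation, for illustration only.
            rendered_template = rendered_template[:MAX_PROMPT_CHARS]
        return llm.invoke(rendered_template)
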
INFO - 2024-09-21 09:44:06,290 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/rest/CartEndpoint.java' with 9 incidents'
INFO - 2024-09-21 09:44:06,301 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 9 incident(s) for src/main/java/com/redhat/coolstore/rest/CartEndpoint.java
[2024-09-21 10:26:12 -0400] [18875] [INFO] Handling signal: winch
INFO - 2024-09-21 10:28:26,930 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:73 - post_get_incident_solutions_for_file()] - END - completed in '3269.8703088760376s: - App: 'coolstore', File: 'src/main/webapp/WEB-INF/beans.xml' with 5 incidents'
INFO - 2024-09-21 10:28:26,933 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/resources/META-INF/persistence.xml' with 8 incidents'
INFO - 2024-09-21 10:28:26,939 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 8 incident(s) for src/main/resources/META-INF/persistence.xml
INFO - 2024-09-21 10:28:26,947 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/rest/OrderEndpoint.java' with 8 incidents'
INFO - 2024-09-21 10:28:27,051 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 8 incident(s) for src/main/java/com/redhat/coolstore/rest/OrderEndpoint.java
[2024-09-21 10:35:08 -0400] [18875] [CRITICAL] WORKER TIMEOUT (pid:18880)
INFO - 2024-09-21 10:35:09,659 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/model/InventoryEntity.java' with 6 incidents'
INFO - 2024-09-21 10:35:09,682 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 6 incident(s) for src/main/java/com/redhat/coolstore/model/InventoryEntity.java
[2024-09-21 10:35:09 -0400] [18875] [CRITICAL] WORKER TIMEOUT (pid:18882)
[2024-09-21 10:35:09 -0400] [18875] [CRITICAL] WORKER TIMEOUT (pid:18883)
[2024-09-21 10:35:09 -0400] [18875] [CRITICAL] WORKER TIMEOUT (pid:18884)
[2024-09-21 10:35:09 -0400] [18875] [ERROR] Worker (pid:18880) was sent SIGKILL! Perhaps out of memory?
[2024-09-21 10:35:09 -0400] [20257] [INFO] Booting worker with pid: 20257
[2024-09-21 10:35:09 -0400] [18875] [ERROR] Worker (pid:18883) was sent SIGKILL! Perhaps out of memory?
[2024-09-21 10:35:09 -0400] [18875] [ERROR] Worker (pid:18882) was sent SIGKILL! Perhaps out of memory?
[2024-09-21 10:35:09 -0400] [18875] [ERROR] Worker (pid:18884) was sent SIGKILL! Perhaps out of memory?
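
The CRITICAL WORKER TIMEOUT / SIGKILL pairs above are gunicorn's arbiter reaping workers that went gunicorn_timeout (3600 s in this config) without a heartbeat, which fits the request durations in this log (one call blocked for 3269.87 s). The LLM call runs synchronously, so while it is in flight the aiohttp worker's event loop cannot service gunicorn's keep-alive notification, and any call that outlasts the timeout gets the worker SIGKILLed; the stock "Perhaps out of memory?" message is likely a red herring here. A minimal sketch of one mitigation, assuming the same blocking llm.invoke call; invoke_without_blocking is a hypothetical helper, not part of Kai:

    import asyncio

    async def invoke_without_blocking(llm, rendered_template: str):
        # Hypothetical mitigation, not Kai code: run the synchronous LLM call
        # on a worker thread so the event loop stays free to answer
        # gunicorn's heartbeat while the model responds.
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(None, llm.invoke, rendered_template)
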
[2024-09-21 10:35:09 -0400] [20258] [INFO] Booting worker with pid: 20258
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 10:35:09,823 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 10:35:09,828 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 10:35:09,828 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 10:35:09,829 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
INFO - 2024-09-21 10:35:09,830 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 10:35:09,831 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
[2024-09-21 10:35:09 -0400] [20259] [INFO] Booting worker with pid: 20259
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 10:35:09,863 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 10:35:09,866 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 10:35:09,867 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
[2024-09-21 10:35:09 -0400] [20260] [INFO] Booting worker with pid: 20260
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 10:35:09,876 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 10:35:09,879 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 10:35:09,879 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
INFO - 2024-09-21 10:35:09,964 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 10:35:09,964 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 10:35:09,965 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
INFO - 2024-09-21 10:35:09,965 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
INFO - 2024-09-21 10:35:09,967 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/model/ShoppingCart.java' with 1 incidents'
INFO - 2024-09-21 10:35:09,993 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 10:35:09,994 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
INFO - 2024-09-21 10:35:10,002 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 10:35:10,002 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
INFO - 2024-09-21 10:35:10,055 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 1 incident(s) for src/main/java/com/redhat/coolstore/model/ShoppingCart.java
INFO - 2024-09-21 10:35:30,654 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/persistence/Resources.java' with 6 incidents'
INFO - 2024-09-21 10:35:30,771 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 6 incident(s) for src/main/java/com/redhat/coolstore/persistence/Resources.java
[2024-09-21 10:35:30 -0400] [18875] [CRITICAL] WORKER TIMEOUT (pid:18885)
[2024-09-21 10:35:31 -0400] [18875] [ERROR] Worker (pid:18885) was sent SIGKILL! Perhaps out of memory?
[2024-09-21 10:35:31 -0400] [20264] [INFO] Booting worker with pid: 20264
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 10:35:32,011 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 10:35:32,016 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 10:35:32,016 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
INFO - 2024-09-21 10:35:32,140 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 10:35:32,140 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.
INFO - 2024-09-21 10:35:57,216 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:73 - post_get_incident_solutions_for_file()] - END - completed in '47.249579668045044s: - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/model/ShoppingCart.java' with 1 incidents'
INFO - 2024-09-21 10:35:57,218 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/model/Order.java' with 10 incidents'
INFO - 2024-09-21 10:35:57,224 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/rest/ProductEndpoint.java' with 3 incidents'
INFO - 2024-09-21 10:35:57,225 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 10 incident(s) for src/main/java/com/redhat/coolstore/model/Order.java
INFO - 2024-09-21 10:35:57,310 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 3 incident(s) for src/main/java/com/redhat/coolstore/rest/ProductEndpoint.java
INFO - 2024-09-21 10:45:18,925 - kai.routes.get_incident_solutions_for_file - [get_incident_solutions_for_file.py:38 - post_get_incident_solutions_for_file()] - START - App: 'coolstore', File: 'src/main/java/com/redhat/coolstore/rest/CartEndpoint.java' with 9 incidents'
INFO - 2024-09-21 10:45:19,059 - kai.service.kai_application.kai_application - [ kai_application.py:135 - get_incident_solutions_for_file()] - Processing incident batch 1/1 with 9 incident(s) for src/main/java/com/redhat/coolstore/rest/CartEndpoint.java
[2024-09-21 10:45:19 -0400] [18875] [CRITICAL] WORKER TIMEOUT (pid:18881)
[2024-09-21 10:45:20 -0400] [18875] [ERROR] Worker (pid:18881) was sent SIGKILL! Perhaps out of memory?
[2024-09-21 10:45:20 -0400] [20505] [INFO] Booting worker with pid: 20505
Config loaded: KaiConfig(log_level='info', file_log_level='debug', log_dir='$pwd/logs', demo_mode=False, trace_enabled=True, gunicorn_workers=8, gunicorn_timeout=3600, gunicorn_bind='0.0.0.0:8080', incident_store=KaiConfigIncidentStore(solution_detectors=<SolutionDetectorKind.NAIVE: 'naive'>, solution_producers=<SolutionProducerKind.LLM_LAZY: 'llm_lazy'>, args=KaiConfigIncidentStorePostgreSQLArgs(provider=<KaiConfigIncidentStoreProvider.POSTGRESQL: 'postgresql'>, host='127.0.0.1', database='kai', user='kai', password='dog8code', connection_string=None, solution_detection=<SolutionDetectorKind.NAIVE: 'naive'>)), models=KaiConfigModels(provider='ChatIBMGenAI', args={'model_id': 'meta-llama/llama-3-70b-instruct', 'parameters': {'max_new_tokens': 2048}}, template=None, llama_header=None, llm_retries=5, llm_retry_delay=10.0), solution_consumers=[<SolutionConsumerKind.DIFF_ONLY: 'diff_only'>, <SolutionConsumerKind.LLM_SUMMARY: 'llm_summary'>])
Console logging for 'kai' is set to level 'INFO'
File logging for 'kai' is set to level 'DEBUG' writing to file: '/Users/jmatthews/git/jwmatthews/kai/logs/kai_server.log'
INFO - 2024-09-21 10:45:20,542 - kai.service.kai_application.kai_application - [ kai_application.py:55 - __init__()] - Tracing enabled.
INFO - 2024-09-21 10:45:20,547 - kai.service.kai_application.kai_application - [ kai_application.py:64 - __init__()] - Selected provider: ChatIBMGenAI
INFO - 2024-09-21 10:45:20,548 - kai.service.kai_application.kai_application - [ kai_application.py:65 - __init__()] - Selected model: meta-llama/llama-3-70b-instruct
INFO - 2024-09-21 10:45:20,663 - kai.service.kai_application.kai_application - [ kai_application.py:85 - __init__()] - Selected incident store: postgresql
INFO - 2024-09-21 10:45:20,664 - kai.server - [ server.py:58 - app()] - Kai server is ready to receive requests.