| Category | Field | Type | Description |
|---|---|---|---|
| General LLM Interaction Fields | gen_ai.prompt | text | The full text of the user's request to the LLM. |
| | gen_ai.usage.prompt_tokens | integer | Number of tokens in the user's request. |
| | gen_ai.completion | text | The full text of the LLM's response. |
| | gen_ai.usage.completion_tokens | integer | Number of tokens in the LLM's response. |
| | gen_ai.system | keyword | Name of the LLM foundation model vendor. |
| | gen_ai.user.id | keyword | Unique identifier for the user. |
| | gen_ai.request.id | keyword | Unique identifier for the LLM request. |
| | gen_ai.response.id | keyword | Unique identifier for the LLM response. |
| | gen_ai.response.error_code | keyword | Error code returned in the LLM response. |
| | gen_ai.response.finish_reasons | keyword array | Reasons the LLM response stopped. |
| | gen_ai.request.timestamp | date | Timestamp when the request was made. |
| | gen_ai.response.timestamp | date | Timestamp when the response was received. |
| | gen_ai.request.model.id | keyword | ID of the LLM model a request is being made to. |
| | gen_ai.request.max_tokens | integer | Maximum number of tokens the LLM generates for a request. |
| | gen_ai.request.temperature | float | Temperature setting for the LLM request. |
| | gen_ai.request.top_k | float | The top_k sampling setting for the LLM request. |
| | gen_ai.request.top_p | float | The top_p sampling setting for the LLM request. |
| | gen_ai.request.model_version | keyword | Version of the LLM model used to generate the response. |
| | gen_ai.request.model.role | keyword | Role of the LLM model in the interaction. |
| | gen_ai.request.model.type | keyword | Type of LLM model. |
| | gen_ai.request.model.description | keyword | Description of the LLM model. |
| | gen_ai.request.model.instructions | text | Custom instructions for the LLM model. |
| Text Quality and Relevance Metric Fields | gen_ai.text.readability_score | float | Measures the readability level of the text. |
| | gen_ai.text.complexity_score | float | Evaluates the complexity of the text. |
| | gen_ai.text.similarity_score | float | Measures the similarity between the prompt and response. |
| Security Metric Fields | gen_ai.security.regex_pattern_count | integer | Counts occurrences of strings matching user-defined regex patterns. |
| | gen_ai.security.jailbreak_score | float | Measures similarity to known jailbreak attempts. |
| | gen_ai.security.prompt_injection_score | float | Measures similarity to known prompt injection attacks. |
| | gen_ai.security.hallucination_consistency | float | Consistency check between multiple responses. |
| | gen_ai.security.refusal_score | float | Measures similarity to known LLM refusal responses. |
| Policy Enforcement Fields | gen_ai.policy.name | keyword | Name of the specific policy that was triggered. |
| | gen_ai.policy.violation | boolean | Specifies if a security policy was violated. |
| | gen_ai.policy.action | keyword | Action taken due to a policy violation, such as blocking, alerting, or modifying the content. |
| | gen_ai.policy.match_detail | nested | Details about what specifically triggered the policy, including matched words, phrases, or patterns. |
| | gen_ai.policy.confidence | float | Confidence level in the policy match that triggered the action, quantifying how closely the identified content matched the policy criteria. |
| Threat Analysis Fields | gen_ai.threat.risk_score | float | Numerical score indicating the potential risk associated with the response. |
| | gen_ai.threat.type | keyword | Type of threat detected in the LLM interaction. |
| | gen_ai.threat.detected | boolean | Whether a security threat was detected. |
| | gen_ai.threat.category | keyword | Category of the detected security threat. |
| | gen_ai.threat.description | text | Description of the detected security threat. |
| | gen_ai.threat.action | keyword | Recommended action to mitigate the detected security threat. |
| | gen_ai.threat.source | keyword | Source of the detected security threat. |
| | gen_ai.threat.signature | keyword | Signature of the detected security threat. |
| | gen_ai.threat.yara_matches | nested | Stores results from YARA scans, including rule matches and categories. |
| Compliance Fields | gen_ai.compliance.violation_detected | boolean | Indicates if any compliance violation was detected during the interaction. |
| | gen_ai.compliance.violation_code | keyword | Code identifying the specific compliance rule that was violated. |
| | gen_ai.compliance.response_triggered | keyword array | Lists compliance-related filters triggered during processing of the response, such as data privacy filters or regulatory compliance checks. |
| | gen_ai.compliance.request_triggered | keyword array | Lists compliance-related filters triggered during processing of the request, such as data privacy filters or regulatory compliance checks. |
| OWASP Top Ten Specific Fields | gen_ai.owasp.id | keyword | Identifier for the OWASP risk addressed. |
| | gen_ai.owasp.description | text | Description of the OWASP risk triggered. |
| Security Tools Analysis Fields | gen_ai.analysis.tool_names | keyword array | Names of the security or analysis tools used. |
| | gen_ai.analysis.function | keyword | Name of the security or analysis function used. |
| | gen_ai.analysis.findings | nested | Detailed findings from security tools. |
| | gen_ai.analysis.action_recommended | keyword | Recommended actions based on the analysis. |
| Sentiment and Toxicity Analysis Fields | gen_ai.sentiment.score | float | Sentiment analysis score. |
| | gen_ai.sentiment.toxicity_score | float | Toxicity analysis score. |
| | gen_ai.sentiment.content_inappropriate | boolean | Whether the content was flagged as inappropriate or sensitive. |
| | gen_ai.sentiment.content_categories | keyword array | Categories of content identified as sensitive or requiring moderation. |
| Performance Metric Fields | gen_ai.performance.response_time | long | Time taken by the LLM to generate a response, in milliseconds. |
| | gen_ai.performance.request_size | long | Size of the request payload in bytes. |
| | gen_ai.performance.start_response_time | long | Time taken by the LLM to send the first response byte, in milliseconds. |
| | gen_ai.performance.response_size | long | Size of the response payload in bytes. |
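As a quick illustration, a logging pipeline emitting these fields might build a flat event keyed by the dotted field names above. The sketch below is a hypothetical example, not part of the schema: the helper names and all field values are made up, and only the field names and their types come from the table.

```python
# Minimal sketch of an event populated with gen_ai.* fields.
# Field names follow the table above; values are illustrative only.

def build_gen_ai_event() -> dict:
    """Assemble a flat event keyed by dotted gen_ai.* field names."""
    return {
        "gen_ai.system": "openai",                      # keyword: vendor name
        "gen_ai.request.id": "req-123",                 # keyword
        "gen_ai.request.model.id": "gpt-4",             # keyword
        "gen_ai.request.temperature": 0.7,              # float
        "gen_ai.usage.prompt_tokens": 42,               # integer
        "gen_ai.usage.completion_tokens": 128,          # integer
        "gen_ai.security.prompt_injection_score": 0.12, # float, 0..1
        "gen_ai.policy.violation": False,               # boolean
    }

def total_tokens(event: dict) -> int:
    """Sum prompt and completion token counts for usage reporting."""
    return (event["gen_ai.usage.prompt_tokens"]
            + event["gen_ai.usage.completion_tokens"])

event = build_gen_ai_event()
print(total_tokens(event))  # 170
```

A downstream store could index these either as flat dotted keys (as here) or expanded into nested objects, depending on the backend's mapping conventions.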
Last active: May 2, 2024 03:48