@Bogyie
Created July 18, 2024 05:40
title: Steampipe Table: aws_accessanalyzer_analyzer - Query AWS Access Analyzer using SQL
description: Allows users to query Access Analyzer analyzers in AWS IAM to retrieve information about analyzers.

Table: aws_accessanalyzer_analyzer - Query AWS Access Analyzer using SQL

The AWS Access Analyzer is a service that helps to identify resources in your organization and accounts, such as S3 buckets or IAM roles, that are shared with an external entity. It uses logic-based reasoning to analyze the resource-based policies in your AWS environment, allowing you to identify unintended access to your resources and data. This helps in mitigating potential security risks.

Table Usage Guide

The aws_accessanalyzer_analyzer table in Steampipe provides you with information about analyzers within AWS IAM Access Analyzer. This table allows you, as a DevOps engineer, to query analyzer-specific details, including the analyzer ARN, type, status, and associated metadata. You can utilize this table to gather insights on analyzers, such as the status of each analyzer, the type of analyzer, and the resource that was analyzed. The schema outlines the various attributes of the Access Analyzer for you, including the analyzer ARN, creation time, last resource scanned, and associated tags.

Examples

Basic info

Explore the status and type of your AWS Access Analyzer analyzers, along with the most recent resource each one analyzed. This helps administrators and security personnel confirm that their AWS environment is continuously scanned for compliance and security risks, and stay informed about each analyzer's activity.

select
  name,
  last_resource_analyzed,
  last_resource_analyzed_at,
  status,
  type
from
  aws_accessanalyzer_analyzer;

List analyzers which are enabled

Determine which AWS Access Analyzer analyzers are active, along with their last analyzed resources and associated tags. This helps you confirm that the necessary analyzers are operational and actively scanning resources, maintaining continuous compliance and security oversight.

select
  name,
  status,
  last_resource_analyzed,
  last_resource_analyzed_at,
  tags
from
  aws_accessanalyzer_analyzer
where
  status = 'ACTIVE';

List analyzers with findings that need to be resolved

Identify active AWS Access Analyzer analyzers that have unresolved findings. This helps security and compliance teams pinpoint which analyzers have detected potential issues requiring immediate attention, ensuring no critical issue is overlooked in the AWS environment.

select
  name,
  status,
  type,
  last_resource_analyzed
from
  aws_accessanalyzer_analyzer
where
  status = 'ACTIVE'
  and findings is not null;
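Count analyzers by type

Aggregating on the type column (shown in the basic info example) summarizes how many analyzers of each kind exist across your accounts. A minimal sketch that works in both PostgreSQL and SQLite; the specific type values (e.g. ACCOUNT, ORGANIZATION) are assumptions:

```sql
select
  type,
  count(*) as analyzer_count
from
  aws_accessanalyzer_analyzer
group by
  type;
```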
title: Steampipe Table: aws_accessanalyzer_finding - Query AWS Access Analyzer Findings using SQL
description: Allows users to query Access Analyzer findings in AWS IAM to retrieve detailed information about potential security risks.

Table: aws_accessanalyzer_finding - Query AWS Access Analyzer Findings using SQL

AWS Access Analyzer findings provide detailed information about potential security risks in your AWS environment. These findings are generated when Access Analyzer identifies resources that are shared with an external entity, highlighting potential unintended access. By analyzing the resource-based policies, Access Analyzer helps you understand how access to your resources is granted and suggests modifications to achieve desired access policies, enhancing your security posture.

Table Usage Guide

The aws_accessanalyzer_finding table in Steampipe allows you to query information related to findings from the AWS IAM Access Analyzer. This table is essential for security and compliance teams, enabling them to identify, analyze, and manage findings related to resource access policies. Through this table, users can access detailed information about each finding, including the actions involved, the condition that led to the finding, the resource and principal involved, and the finding's status. By leveraging this table, you can efficiently address security and compliance issues in your AWS environment.

Examples

Basic info

Retrieve essential details of findings to understand potential access issues and their current status. This query helps in identifying the nature of each finding, the resources involved, and the actions recommended or taken to resolve these issues.

select
  id,
  access_analyzer_arn,
  analyzed_at,
  resource_type,
  status,
  is_public
from
  aws_accessanalyzer_finding;

Findings involving public access

Identify findings where resources are potentially exposed to public access. Highlighting such findings is critical for prioritizing issues that may lead to unauthorized access. This query helps in swiftly identifying and addressing potential vulnerabilities, ensuring that resources are adequately secured against public exposure.

select
  id,
  resource_type,
  access_analyzer_arn,
  status,
  is_public
from
  aws_accessanalyzer_finding
where
  is_public = true;

Findings by resource type

Aggregate findings by resource type to focus remediation efforts on specific types of resources. This categorization helps in streamlining the security review process by allowing teams to prioritize resources based on their sensitivity and exposure.

select
  resource_type,
  count(*) as findings_count
from
  aws_accessanalyzer_finding
group by
  resource_type;

Recent findings

Focus on findings that have been identified recently to address potentially new security risks. This query aids in maintaining an up-to-date security posture by ensuring that recent findings are promptly reviewed and addressed.

-- PostgreSQL:
select
  id,
  resource,
  status,
  analyzed_at
from
  aws_accessanalyzer_finding
where
  analyzed_at > current_date - interval '30 days';

-- SQLite:
select
  id,
  resource,
  status,
  analyzed_at
from
  aws_accessanalyzer_finding
where
  analyzed_at > date('now', '-30 day');
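Findings by status

Counting findings by their status column (used in the examples above) gives a quick view of how many findings remain open versus resolved. A minimal sketch that works in both PostgreSQL and SQLite; the exact status values (e.g. ACTIVE, ARCHIVED, RESOLVED) are assumptions and may vary:

```sql
select
  status,
  count(*) as findings_count
from
  aws_accessanalyzer_finding
group by
  status;
```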
title: Steampipe Table: aws_account - Query AWS Accounts using SQL
description: Allows users to query AWS Account information, including details about the account's status, owner, and associated resources.

Table: aws_account - Query AWS Accounts using SQL

The AWS Account is a container for AWS resources. It is used to sign up for, organize, and manage AWS services, and it provides administrative control access to resources. An AWS Account contains its own data, with its own settings, including billing and payment information.

Table Usage Guide

The aws_account table in Steampipe provides you with information about your AWS Account. This table allows you, as a DevOps engineer, to query account-specific details, including the account status, owner, and associated resources. You can utilize this table to gather insights on your AWS account, such as the account's ARN, creation date, email address, and more. The schema outlines the various attributes of your AWS account, including the account ID, account alias, and whether your account is a root account.

Examples

Basic AWS account info

Discover basic details about your AWS account, including its aliases and associated organization details such as the master account. This is useful for quickly accessing key information about your account, particularly in larger organizations where multiple accounts are in use.

-- PostgreSQL:
select
  alias,
  arn,
  organization_id,
  organization_master_account_email,
  organization_master_account_id
from
  aws_account
  cross join jsonb_array_elements(account_aliases) as alias;

-- SQLite:
select
  alias.value as alias,
  arn,
  organization_id,
  organization_master_account_email,
  organization_master_account_id
from
  aws_account,
  json_each(account_aliases) as alias;

Organization policy of aws account

Examine the types and statuses of policies available to your AWS organization. This is useful for auditing purposes and for ensuring policy compliance and efficient resource utilization across all accounts in the organization.

-- PostgreSQL:
select
  organization_id,
  policy ->> 'Type' as policy_type,
  policy ->> 'Status' as policy_status
from
  aws_account
  cross join jsonb_array_elements(organization_available_policy_types) as policy;

-- SQLite:
select
  organization_id,
  json_extract(policy.value, '$.Type') as policy_type,
  json_extract(policy.value, '$.Status') as policy_status
from
  aws_account,
  json_each(organization_available_policy_types) as policy;
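Check organization membership

A simple null check on organization_id indicates whether the account belongs to an AWS Organization. A minimal sketch, assuming organization_id is null for standalone accounts:

```sql
select
  arn,
  organization_id,
  organization_id is not null as in_organization
from
  aws_account;
```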
title: Steampipe Table: aws_account_alternate_contact - Query AWS Account Alternate Contact using SQL
description: Allows users to query AWS Account Alternate Contact to fetch details about the alternate contacts associated with an AWS account.

Table: aws_account_alternate_contact - Query AWS Account Alternate Contact using SQL

The AWS Account Alternate Contact is a feature that allows you to designate additional contacts for your AWS account. These contacts can be specified for different types of communication such as billing, operations, or security, providing an extra layer of management and oversight. It's an effective way to ensure important account-related information is received by the right people in your organization.

Table Usage Guide

The aws_account_alternate_contact table in Steampipe provides you with information about the alternate contacts associated with your AWS account. You can use this table to query alternate contact-specific details, including the contact type, name, title, email, and phone number if you're a DevOps engineer or an AWS administrator. You can use this table to gather insights on alternate contacts, such as their role in the organization, their contact information, and more. The schema outlines the various attributes of your AWS Account Alternate Contact, including the account id, contact type, name, title, email, and phone number.

Important Notes

This table supports the optional list key column linked_account_id, which comes with the following requirements:

  • The caller must be an identity in the organization's management account or a delegated administrator account.
  • The specified account ID must also be a member account in the same organization.
  • The organization must have all features enabled.
  • The organization must have trusted access enabled for the Account Management service.

Examples

Basic info

Discover the segments that are linked to specific AWS accounts and the type of contact associated with them. This can be useful in understanding the communication channels and roles involved in managing these accounts.

select
  name,
  linked_account_id,
  contact_type,
  email_address,
  phone_number,
  contact_title
from
  aws_account_alternate_contact;

Get billing alternate contact details

Discover the segments that contain alternate contact details specifically for billing purposes. This can be useful in instances where you need to directly reach out to the responsible parties for billing inquiries or issues.

select
  name,
  linked_account_id,
  contact_type,
  email_address,
  phone_number,
  contact_title
from
  aws_account_alternate_contact
where
  contact_type = 'BILLING';

Get alternate contact details for an account in the organization (using credentials from the management account)

Discover the alternate contact details for a specific account within your organization using information from the management account. This is useful for ensuring communication channels are updated and accurate.

select
  name,
  linked_account_id,
  contact_type,
  email_address,
  phone_number,
  contact_title
from
  aws_account_alternate_contact
where
  linked_account_id = '123456789012';

Get security alternate contact details for an account in the organization (using credentials from the management account)

This query is useful for identifying the alternate contact details related to security for a specific account within an organization. It allows for efficient monitoring and communication in case of any security-related issues or concerns.

select
  name,
  linked_account_id,
  contact_type,
  email_address,
  phone_number,
  contact_title
from
  aws_account_alternate_contact
where
  linked_account_id = '123456789012'
  and contact_type = 'SECURITY';
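Count alternate contacts by type

To see which contact types (BILLING, OPERATIONS, SECURITY) are populated across linked accounts, a simple aggregation over the columns used above can help; a minimal sketch that works in both PostgreSQL and SQLite:

```sql
select
  contact_type,
  count(*) as contact_count
from
  aws_account_alternate_contact
group by
  contact_type;
```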
title: Steampipe Table: aws_account_contact - Query AWS Account Contact using SQL
description: Allows users to query AWS Account Contact details, including email, mobile, and address information associated with an AWS account.

Table: aws_account_contact - Query AWS Account Contact using SQL

The AWS Account Contact is a resource that stores contact information associated with an AWS account. This information can include the account holder's name, email address, and phone number. It is essential for communication purposes, especially for receiving important notifications and alerts related to the AWS services and resources.

Table Usage Guide

The aws_account_contact table in Steampipe provides you with information about contact details associated with an AWS account. This table allows you, as a DevOps engineer, to query contact-specific details, including email, mobile, and address information. You can utilize this table to gather insights on AWS account contact details, such as verification of contact information, understanding the geographical distribution of accounts, and more. The schema outlines the various attributes of the AWS account contact for you, including the account ID, address, email, fax, and phone number.

Important Notes

This table supports the optional list key column linked_account_id, with the following requirements:

  • The caller must be an identity in the organization's management account or a delegated administrator account.
  • The specified account ID must also be a member account in the same organization.
  • The organization must have all features enabled.
  • The organization must have trusted access enabled for the Account Management service.
  • The AWS-managed ReadOnlyAccess policy does not include the account:GetContactInformation permission, so you will need to add it to use this table.

Examples

Basic info

This query allows you to explore the basic contact information linked to your AWS account. The practical application of this query is to quickly identify and review your account details, ensuring they're accurate and up-to-date.

select
  full_name,
  company_name,
  city,
  phone_number,
  postal_code,
  state_or_region,
  website_url
from
  aws_account_contact;

Get contact details for an account in the organization (using credentials from the management account)

Gain insights into the contact information associated with a specific account in your organization. This can be particularly useful for administrators who need to communicate with account holders or verify account details.

select
  full_name,
  company_name,
  city,
  phone_number,
  postal_code,
  state_or_region,
  website_url
from
  aws_account_contact
where
  linked_account_id = '123456789012';
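List contacts with missing details

To spot incomplete contact records, you can filter on unset fields; a minimal sketch, assuming fields that were never provided are returned as null:

```sql
select
  full_name,
  company_name,
  phone_number,
  website_url
from
  aws_account_contact
where
  phone_number is null
  or website_url is null;
```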
title: Steampipe Table: aws_acm_certificate - Query AWS Certificate Manager certificates using SQL
description: Allows users to query AWS Certificate Manager certificates. This table provides information about each certificate, including the domain name, status, issuer, and more. It can be used to monitor certificate details, validity, and expiration data.

Table: aws_acm_certificate - Query AWS Certificate Manager certificates using SQL

The AWS Certificate Manager (ACM) is a service that lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet as well as resources on private networks. AWS Certificate Manager removes the time-consuming manual process of purchasing, uploading, and renewing SSL/TLS certificates.

Table Usage Guide

The aws_acm_certificate table in Steampipe provides you with information about certificates within AWS Certificate Manager (ACM). This table allows you, as a DevOps engineer, to query certificate-specific details, including domain name, status, issuer, and expiration data. You can utilize this table to gather insights on certificates, such as certificate status, verification of issuer, and more. The schema outlines the various attributes of the ACM certificate for you, including the certificate ARN, creation date, domain name, and associated tags.

Examples

Basic info

Analyze the settings to understand the status and usage of your AWS Certificate Manager (ACM) certificates. This can help identify any issues with certificates, such as failure reasons, and see which domains they're associated with, aiding in efficient resource management and troubleshooting.

select
  certificate_arn,
  domain_name,
  failure_reason,
  in_use_by,
  status,
  key_algorithm
from
  aws_acm_certificate;

List of expired certificates

Identify instances where your AWS certificates have expired. This allows you to maintain security by promptly replacing or renewing these certificates.

select
  certificate_arn,
  domain_name,
  status
from
  aws_acm_certificate
where
  status = 'EXPIRED';

List certificates for which transparency logging is disabled

Discover the segments with disabled transparency logging in certificate settings to enhance security and compliance efforts. This allows for proactive mitigation of potential risks associated with non-transparent logging.

-- PostgreSQL:
select
  certificate_arn,
  domain_name,
  status
from
  aws_acm_certificate
where
  certificate_transparency_logging_preference <> 'ENABLED';

-- SQLite:
select
  certificate_arn,
  domain_name,
  status
from
  aws_acm_certificate
where
  certificate_transparency_logging_preference != 'ENABLED';

List certificates without application tag key

Identify the certificates that are missing an application tag key. This can help in pinpointing areas where tagging conventions may not have been followed, aiding in better resource management.

-- PostgreSQL:
select
  certificate_arn,
  tags
from
  aws_acm_certificate
where
  not tags::jsonb ? 'application';

-- SQLite:
select
  certificate_arn,
  tags
from
  aws_acm_certificate
where
  json_extract(tags, '$.application') is null;
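List certificates expiring within 30 days

Beyond already-expired certificates, it helps to catch certificates approaching expiry so they can be renewed in time. A minimal sketch using the not_after timestamp column (an assumption about this table's schema):

```sql
-- PostgreSQL:
select
  certificate_arn,
  domain_name,
  not_after
from
  aws_acm_certificate
where
  not_after < now() + interval '30 days';

-- SQLite:
select
  certificate_arn,
  domain_name,
  not_after
from
  aws_acm_certificate
where
  datetime(not_after) < datetime('now', '+30 day');
```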
title: Steampipe Table: aws_acmpca_certificate_authority - Query AWS ACM PCA Certificate Authorities using SQL
description: Allows users to query AWS ACM PCA Certificate Authorities. It can be used to monitor certificate authority details, validity, usage mode, and expiration data.

Table: aws_acmpca_certificate_authority - Query AWS ACM PCA Certificate Authorities using SQL

The aws_acmpca_certificate_authority table provides detailed information about AWS Certificate Manager Private Certificate Authority (ACM PCA) certificate authorities. These entities enable you to securely issue and manage your private certificates. This table allows for querying configurations, statuses, key storage standards, and more for each certificate authority within your AWS account.

Table Usage Guide

This table can be utilized to monitor the configuration and operational health of your private certificate authorities managed through AWS ACM PCA. It enables security analysts, compliance auditors, and cloud administrators to assess the certificate authorities' compliance with policies, investigate issuance metadata, and understand the security standards being applied.

Examples

Basic information

Retrieve basic details about your ACM PCA Certificate Authorities.

-- PostgreSQL:
select
  arn,
  status,
  created_at,
  not_before,
  not_after,
  key_storage_security_standard,
  failure_reason
from
  aws_acmpca_certificate_authority;

-- SQLite:
select
  arn,
  status,
  datetime(created_at) as created_at,
  datetime(not_before) as not_before,
  datetime(not_after) as not_after,
  key_storage_security_standard,
  failure_reason
from
  aws_acmpca_certificate_authority;

Certificate authorities with specific key storage security standards

List certificate authorities that comply with a specific key storage security standard.

select
  arn,
  status,
  key_storage_security_standard
from
  aws_acmpca_certificate_authority
where
  key_storage_security_standard = 'FIPS_140_2_LEVEL_3_OR_HIGHER';

Certificate authorities by status

Find certificate authorities by their operational status, e.g., ACTIVE, DISABLED.

-- PostgreSQL:
select
  arn,
  status,
  created_at,
  last_state_change_at
from
  aws_acmpca_certificate_authority
where
  status = 'ACTIVE';

-- SQLite:
select
  arn,
  status,
  datetime(created_at) as created_at,
  datetime(last_state_change_at) as last_state_change_at
from
  aws_acmpca_certificate_authority
where
  status = 'ACTIVE';

Tagged certificate authorities

Identify certificate authorities tagged with specific key-value pairs for organizational purposes.

-- PostgreSQL:
select
  arn,
  tags
from
  aws_acmpca_certificate_authority
where
  (tags ->> 'Project') = 'MyProject';

-- SQLite:
select
  arn,
  json_extract(tags, '$.Project') as project_tag
from
  aws_acmpca_certificate_authority
where
  json_extract(tags, '$.Project') = 'MyProject';
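Certificate authorities expiring soon

The not_after column shown in the basic information example can also flag authorities whose certificates are close to expiry; a minimal sketch:

```sql
-- PostgreSQL:
select
  arn,
  status,
  not_after
from
  aws_acmpca_certificate_authority
where
  not_after < now() + interval '90 days';

-- SQLite:
select
  arn,
  status,
  datetime(not_after) as not_after
from
  aws_acmpca_certificate_authority
where
  datetime(not_after) < datetime('now', '+90 day');
```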
title: Steampipe Table: aws_amplify_app - Query AWS Amplify Apps using SQL
description: Allows users to query AWS Amplify Apps to retrieve detailed information about each application, including its name, ARN, creation date, default domain, and more.

Table: aws_amplify_app - Query AWS Amplify Apps using SQL

The AWS Amplify App is a part of AWS Amplify, a set of tools and services that enables developers to build secure, scalable, full-stack applications. These applications can be built with integrated backend services like authentication, analytics, and content delivery, with capabilities such as real-time data syncing. AWS Amplify Apps allow for the creation, configuration, and management of continuous deployment workflows for web apps in the AWS Amplify Console.

Table Usage Guide

The aws_amplify_app table in Steampipe provides you with information about apps within AWS Amplify. This table allows you, as a DevOps engineer, to query app-specific details, including the name, ARN, creation date, last update date, default domain, and associated metadata. You can utilize this table to gather insights on Amplify Apps, such as the apps' status, platform, repository, and more. The schema outlines the various attributes of the Amplify App for you, including the app ID, app ARN, platform, repository, production branch, and associated tags.

Examples

Basic info

Explore the fundamental details of your AWS Amplify applications, including their creation time, platform, and build specifications. This is useful for auditing and for quickly understanding each application's configuration and status.

select
  app_id,
  name,
  description,
  arn,
  platform,
  create_time,
  build_spec
from
  aws_amplify_app;

List apps created within the last 90 days

Identify applications created on AWS Amplify within the last 90 days. This is useful for monitoring new app development and tracking changes to your app portfolio over a quarterly period.

-- PostgreSQL:
select
  name,
  app_id,
  create_time
from
  aws_amplify_app
where
  create_time >= (now() - interval '90' day)
order by
  create_time;

-- SQLite:
select
  name,
  app_id,
  create_time
from
  aws_amplify_app
where
  create_time >= datetime('now', '-90 day')
order by
  create_time;

List apps updated within the last hour

Identify applications updated within the past hour. This is useful for closely monitoring recent changes, particularly in a large or rapidly evolving system, and for verifying that apps function as expected after each update.

-- PostgreSQL:
select
  name,
  app_id,
  update_time
from
  aws_amplify_app
where
  update_time >= (now() - interval '1' hour)
order by
  update_time;

-- SQLite:
select
  name,
  app_id,
  update_time
from
  aws_amplify_app
where
  update_time >= datetime('now', '-1 hour')
order by
  update_time;

Describe information about the production branch for an app

Gain insight into the status of a specific application's production branch, including when it was last deployed. This is useful for tracking the progress and status of application updates and deployments.

-- PostgreSQL:
select
  production_branch ->> 'BranchName' as branch_name,
  production_branch ->> 'LastDeployTime' as last_deploy_time,
  production_branch ->> 'Status' as status
from
  aws_amplify_app
where
  name = 'amplify_app_name';

-- SQLite:
select
  json_extract(production_branch, '$.BranchName') as branch_name,
  json_extract(production_branch, '$.LastDeployTime') as last_deploy_time,
  json_extract(production_branch, '$.Status') as status
from
  aws_amplify_app
where
  name = 'amplify_app_name';

List information about the build spec for an app

Explore the build specifications for a specific application in AWS Amplify, covering the backend, frontend, test, and environment settings. This is useful for understanding an app's structure and configuration for troubleshooting or replication.

-- PostgreSQL:
select
  name,
  app_id,
  build_spec ->> 'backend' as build_backend_spec,
  build_spec ->> 'frontend' as build_frontend_spec,
  build_spec ->> 'test' as build_test_spec,
  build_spec ->> 'env' as build_env_settings
from
  aws_amplify_app
where
  name = 'amplify_app_name';

-- SQLite:
select
  name,
  app_id,
  json_extract(build_spec, '$.backend') as build_backend_spec,
  json_extract(build_spec, '$.frontend') as build_frontend_spec,
  json_extract(build_spec, '$.test') as build_test_spec,
  json_extract(build_spec, '$.env') as build_env_settings
from
  aws_amplify_app
where
  name = 'amplify_app_name';

List information on rewrite(200) redirect settings for an app

Identify instances where an app uses rewrite (200) redirect settings, including the conditions, sources, and targets associated with each redirect. This is useful for troubleshooting navigation issues and optimizing the user experience.

-- PostgreSQL:
select
  name,
  redirects_array ->> 'Condition' as country_code,
  redirects_array ->> 'Source' as source_address,
  redirects_array ->> 'Status' as redirect_type,
  redirects_array ->> 'Target' as destination_address
from
  aws_amplify_app,
  jsonb_array_elements(custom_rules) as redirects_array
where
  redirects_array ->> 'Status' = '200'
  and name = 'amplify_app_name';

-- SQLite:
select
  name,
  json_extract(redirects_array.value, '$.Condition') as country_code,
  json_extract(redirects_array.value, '$.Source') as source_address,
  json_extract(redirects_array.value, '$.Status') as redirect_type,
  json_extract(redirects_array.value, '$.Target') as destination_address
from
  aws_amplify_app,
  json_each(custom_rules) as redirects_array
where
  json_extract(redirects_array.value, '$.Status') = '200'
  and name = 'amplify_app_name';

List all apps that have branch auto build enabled

Determine which applications have the automatic branch build feature enabled. These apps automatically build and deploy code changes pushed to connected branches, facilitating continuous integration and delivery and allowing for efficient monitoring and management.

select
  app_id,
  name,
  description,
  arn
from
  aws_amplify_app
where
  enable_branch_auto_build = true;
select
  app_id,
  name,
  description,
  arn
from
  aws_amplify_app
where
  enable_branch_auto_build = 1;
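Related to branch auto build, Amplify apps can also create branches automatically from matching patterns. A hedged sketch (PostgreSQL syntax), assuming the table exposes `enable_auto_branch_creation` and `auto_branch_creation_patterns` columns:

```sql
-- List apps configured to auto-create branches, with the patterns they match
select
  app_id,
  name,
  auto_branch_creation_patterns
from
  aws_amplify_app
where
  enable_auto_branch_creation = true;
```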
title description
Steampipe Table: aws_api_gateway_api_key - Query AWS API Gateway API Keys using SQL
Allows users to query API Keys in AWS API Gateway. The `aws_api_gateway_api_key` table in Steampipe provides information about API Keys within AWS API Gateway. This table allows DevOps engineers to query API Key-specific details, including its ID, value, enabled status, and associated metadata. Users can utilize this table to gather insights on API Keys, such as keys that are enabled, keys associated with specific stages, and more. The schema outlines the various attributes of the API Key, including the key ID, creation date, enabled status, and associated tags.

Table: aws_api_gateway_api_key - Query AWS API Gateway API Keys using SQL

AWS API Gateway API Keys are used to control and track API usage in Amazon API Gateway. They are associated with API stages to manage access and can be used in conjunction with usage plans to authorize access to specific APIs. API keys are not meant for client-side security, but rather for tracking and controlling how your customers use your API.

Table Usage Guide

The aws_api_gateway_api_key table in Steampipe provides you with information about API Keys within AWS API Gateway. This table allows you, as a DevOps engineer, to query API Key-specific details, including its ID, value, enabled status, and associated metadata. You can utilize this table to gather insights on API Keys, such as keys that are enabled, keys associated with specific stages, and more. The schema outlines the various attributes of the API Key for you, including the key ID, creation date, enabled status, and associated tags.

Examples

API gateway API key basic info

Discover the segments that utilize the API gateway key within the AWS infrastructure. This query can provide insights into the status and usage of API keys, which can be beneficial for monitoring security and optimizing resource utilization.

select
  name,
  id,
  enabled,
  created_date,
  last_updated_date,
  customer_id,
  stage_keys
from
  aws_api_gateway_api_key;
select
  name,
  id,
  enabled,
  created_date,
  last_updated_date,
  customer_id,
  stage_keys
from
  aws_api_gateway_api_key;

List of API keys which are not enabled

Determine the areas in which API keys are not activated to assess potential security risks or unused resources within your AWS API Gateway.

select
  name,
  id,
  customer_id
from
  aws_api_gateway_api_key
where
  not enabled;
select
  name,
  id,
  customer_id
from
  aws_api_gateway_api_key
where
  enabled = 0;
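The usage guide above mentions keys associated with specific stages; the `stage_keys` array can be expanded to see those associations. A hedged sketch (PostgreSQL syntax), assuming each element is a `restApiId/stageName` string:

```sql
-- One row per (key, stage) association
select
  name,
  id,
  stage_key
from
  aws_api_gateway_api_key,
  jsonb_array_elements_text(stage_keys) as stage_key;
```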
title description
Steampipe Table: aws_api_gateway_authorizer - Query AWS API Gateway Authorizer using SQL
Allows users to query AWS API Gateway Authorizer and access data about API Gateway Authorizers in an AWS account. This data includes the authorizer's ID, name, type, provider ARNs, and other configuration details.

Table: aws_api_gateway_authorizer - Query AWS API Gateway Authorizer using SQL

The AWS API Gateway Authorizer is a crucial component in Amazon API Gateway that validates incoming requests before they reach the backend systems. It verifies the caller's identity and checks if the caller has permission to execute the requested operation. This feature enhances the security of your APIs by preventing unauthorized access to your resources.

Table Usage Guide

The aws_api_gateway_api_authorizer table in Steampipe provides you with information about API Gateway Authorizers within AWS API Gateway. This table allows you, as a DevOps engineer, to query authorizer-specific details, including the authorizer's ID, name, type, provider ARNs, and other configuration details. You can utilize this table to gather insights on authorizers, such as the authorizer's type, the ARN of the authorizer's provider, and more. The schema outlines the various attributes of the API Gateway Authorizer for you, including the authorizer's ID, name, type, provider ARNs, and associated metadata.

Examples

API gateway API authorizer basic info

Explore the core details of an API gateway's authorizer configuration, such as its ID, name, and authorization type. This can help you understand the security measures in place for your API gateway and can be useful for auditing purposes.

select
  id,
  name,
  rest_api_id,
  auth_type,
  authorizer_credentials,
  identity_validation_expression,
  identity_source
from
  aws_api_gateway_authorizer;
select
  id,
  name,
  rest_api_id,
  auth_type,
  authorizer_credentials,
  identity_validation_expression,
  identity_source
from
  aws_api_gateway_authorizer;

List the API authorizers that use Cognito user pools to authorize API calls

Explore which API authorizers are utilizing Cognito user pools for API call authorization. This can help in assessing the security configuration of your APIs and identify any potential areas for improvement.

select
  id,
  name,
  rest_api_id,
  auth_type
from
  aws_api_gateway_authorizer
where
  auth_type = 'cognito_user_pools';
select
  id,
  name,
  rest_api_id,
  auth_type
from
  aws_api_gateway_authorizer
where
  auth_type = 'cognito_user_pools';
title description
Steampipe Table: aws_api_gateway_domain_name - Query AWS API Gateway Domain Names using SQL
Allows users to query AWS API Gateway Domain Names and retrieve details about each domain's configuration, certificate, and associated API.

Table: aws_api_gateway_domain_name - Query AWS API Gateway Domain Names using SQL

The AWS API Gateway Domain Name is a component of Amazon's API Gateway service that allows you to create, configure, and manage a custom domain name to maintain a consistent user experience. It enables routing of incoming requests to various backend services, including AWS Lambda functions, and provides features like SSL certificates for secure communication. This is crucial for providing a seamless and secure API communication channel for your applications.

Table Usage Guide

The aws_api_gateway_domain_name table in Steampipe provides you with information about domain names within AWS API Gateway. This table allows you, as a DevOps engineer, to query domain-specific details, including the domain name, certificate details, and the associated API. You can utilize this table to gather insights on domains, such as the domain's endpoint configuration, the type of certificate used, and the API it's associated with. The schema outlines the various attributes of the domain name for you, including the domain name, certificate upload date, certificate ARN, and endpoint configuration.

Examples

Basic info

Determine the areas in which your API Gateway domain name configurations are operating in AWS. This can help you understand the status and ownership of your domain names, providing insights into their distribution and certificate details.

select
  domain_name,
  certificate_arn,
  distribution_domain_name,
  distribution_hosted_zone_id,
  domain_name_status,
  ownership_verification_certificate_arn
from
  aws_api_gateway_domain_name;
select
  domain_name,
  certificate_arn,
  distribution_domain_name,
  distribution_hosted_zone_id,
  domain_name_status,
  ownership_verification_certificate_arn
from
  aws_api_gateway_domain_name;

List available domain names

Determine the areas in which domain names are available for use in the AWS API Gateway. This is beneficial for identifying potential new domains for your applications.

select
  domain_name,
  certificate_arn,
  certificate_upload_date,
  regional_certificate_arn,
  domain_name_status
from
  aws_api_gateway_domain_name
where
  domain_name_status = 'AVAILABLE';
select
  domain_name,
  certificate_arn,
  certificate_upload_date,
  regional_certificate_arn,
  domain_name_status
from
  aws_api_gateway_domain_name
where
  domain_name_status = 'AVAILABLE';

Get certificate details of each domain name

Discover the segments that provide detailed insights about the certificates associated with each domain name. This is useful in understanding the security measures in place and their configurations, aiding in better management of your web assets.

select
  d.domain_name,
  d.regional_certificate_arn,
  c.certificate,
  c.certificate_transparency_logging_preference,
  c.created_at,
  c.imported_at,
  c.issuer,
  c.issued_at,
  c.key_algorithm
from
  aws_api_gateway_domain_name as d,
  aws_acm_certificate as c
where
  c.certificate_arn = d.regional_certificate_arn;
select
  d.domain_name,
  d.regional_certificate_arn,
  c.certificate,
  c.certificate_transparency_logging_preference,
  c.created_at,
  c.imported_at,
  c.issuer,
  c.issued_at,
  c.key_algorithm
from
  aws_api_gateway_domain_name as d,
  aws_acm_certificate as c
where
  c.certificate_arn = d.regional_certificate_arn;

Get endpoint configuration details of each domain

Determine the configuration details of each domain in your AWS API Gateway to better understand the types of endpoints used and identify any associated Virtual Private Cloud (VPC) endpoints.

select
  domain_name,
  endpoint_configuration -> 'Types' as endpoint_types,
  endpoint_configuration -> 'VpcEndpointIds' as vpc_endpoint_ids
from
  aws_api_gateway_domain_name;
select
  domain_name,
  json_extract(endpoint_configuration, '$.Types') as endpoint_types,
  json_extract(endpoint_configuration, '$.VpcEndpointIds') as vpc_endpoint_ids
from
  aws_api_gateway_domain_name;

Get mutual TLS authentication configuration of each domain name

This query can be used to analyze the mutual TLS authentication settings for each domain name in an AWS API Gateway. It provides insights into the truststore details, which can be beneficial for improving security configurations and troubleshooting potential issues.

select
  domain_name,
  mutual_tls_authentication ->> 'TruststoreUri' as truststore_uri,
  mutual_tls_authentication ->> 'TruststoreVersion' as truststore_version,
  mutual_tls_authentication ->> 'TruststoreWarnings' as truststore_warnings
from
  aws_api_gateway_domain_name;
select
  domain_name,
  json_extract(mutual_tls_authentication, '$.TruststoreUri') as truststore_uri,
  json_extract(mutual_tls_authentication, '$.TruststoreVersion') as truststore_version,
  json_extract(mutual_tls_authentication, '$.TruststoreWarnings') as truststore_warnings
from
  aws_api_gateway_domain_name;
title description
Steampipe Table: aws_api_gateway_method - Query AWS API Gateway Methods using SQL
Represents a client-facing interface by which the client calls the API to access back-end resources. A Method resource is integrated with an Integration resource. Both consist of a request and one or more responses. The method request takes the client input that is passed to the back end through the integration request. A method response returns the output from the back end to the client through an integration response. A method request is embodied in a Method resource, whereas an integration request is embodied in an Integration resource. On the other hand, a method response is represented by a MethodResponse resource, whereas an integration response is represented by an IntegrationResponse resource.

Table: aws_api_gateway_method - Query AWS API Gateway Methods using SQL

Represents a client-facing interface by which the client calls the API to access back-end resources. A Method resource is integrated with an Integration resource. Both consist of a request and one or more responses. The method request takes the client input that is passed to the back end through the integration request. A method response returns the output from the back end to the client through an integration response. A method request is embodied in a Method resource, whereas an integration request is embodied in an Integration resource. On the other hand, a method response is represented by a MethodResponse resource, whereas an integration response is represented by an IntegrationResponse resource.

Table Usage Guide

The aws_api_gateway_method table in Steampipe allows users to query information about AWS API Gateway Methods. These methods represent client-facing interfaces for accessing back-end resources. Users can retrieve details such as the REST API ID, resource ID, HTTP method, path, and whether API key authorization is required. Additionally, users can query methods with specific criteria, such as HTTP method type or authorization type.

Examples

Basic info

Retrieve basic information about AWS API Gateway Methods, including the REST API ID, resource ID, HTTP method, path, and whether API key authorization is required. This query provides an overview of the methods in your AWS API Gateway.

select
  rest_api_id,
  resource_id,
  http_method,
  path,
  api_key_required
from
  aws_api_gateway_method;
select
  rest_api_id,
  resource_id,
  http_method,
  path,
  api_key_required
from
  aws_api_gateway_method;

List API Gateway GET methods

Identify AWS API Gateway Methods that use the HTTP GET method. This query helps you filter and view specific types of methods in your API Gateway.

select
  rest_api_id,
  resource_id,
  http_method,
  operation_name
from
  aws_api_gateway_method
where
  http_method = 'GET';
select
  rest_api_id,
  resource_id,
  http_method,
  operation_name
from
  aws_api_gateway_method
where
  http_method = 'GET';

List methods with open access

Retrieve AWS API Gateway Methods that do not require any authorization. This query helps you identify methods with open access settings.

select
  rest_api_id,
  resource_id,
  http_method,
  path,
  authorization_type,
  authorizer_id
from
  aws_api_gateway_method
where
  authorization_type = 'none';
select
  rest_api_id,
  resource_id,
  http_method,
  path,
  authorization_type,
  authorizer_id
from
  aws_api_gateway_method
where
  authorization_type = 'none';

Get integration details of methods

Retrieve detailed integration configuration information for AWS API Gateway Methods. This query includes information such as cache key parameters, cache namespace, connection ID, connection type, content handling, credentials, HTTP method, passthrough behavior, request parameters, request templates, timeout in milliseconds, TLS configuration, integration type, URI, and integration responses.

select
  rest_api_id,
  resource_id,
  http_method,
  method_integration -> 'CacheKeyParameters' as cache_key_parameters,
  method_integration ->> 'CacheNamespace' as cache_namespace,
  method_integration ->> 'ConnectionId' as connection_id,
  method_integration ->> 'ConnectionType' as connection_type,
  method_integration ->> 'ContentHandling' as content_handling,
  method_integration ->> 'Credentials' as credentials,
  method_integration ->> 'HttpMethod' as http_method,
  method_integration ->> 'PassthroughBehavior' as passthrough_behavior,
  method_integration ->> 'RequestParameters' as request_parameters,
  method_integration -> 'RequestTemplates' as request_templates,
  method_integration ->> 'TimeoutInMillis' as timeout_in_millis,
  method_integration ->> 'TlsConfig' as tls_config,
  method_integration ->> 'Type' as type,
  method_integration ->> 'Uri' as uri,
  method_integration -> 'IntegrationResponses' as integration_responses
from
  aws_api_gateway_method;
select
  rest_api_id,
  resource_id,
  http_method,
  json_extract(method_integration, '$.CacheKeyParameters') as cache_key_parameters,
  json_extract(method_integration, '$.CacheNamespace') as cache_namespace,
  json_extract(method_integration, '$.ConnectionId') as connection_id,
  json_extract(method_integration, '$.ConnectionType') as connection_type,
  json_extract(method_integration, '$.ContentHandling') as content_handling,
  json_extract(method_integration, '$.Credentials') as credentials,
  json_extract(method_integration, '$.HttpMethod') as http_method,
  json_extract(method_integration, '$.PassthroughBehavior') as passthrough_behavior,
  json_extract(method_integration, '$.RequestParameters') as request_parameters,
  json_extract(method_integration, '$.RequestTemplates') as request_templates,
  json_extract(method_integration, '$.TimeoutInMillis') as timeout_in_millis,
  json_extract(method_integration, '$.TlsConfig') as tls_config,
  json_extract(method_integration, '$.Type') as type,
  json_extract(method_integration, '$.Uri') as uri,
  json_extract(method_integration, '$.IntegrationResponses') as integration_responses
from
  aws_api_gateway_method;
title description
Steampipe Table: aws_api_gateway_rest_api - Query AWS API Gateway Rest APIs using SQL
Allows users to query AWS API Gateway Rest APIs to retrieve information about API Gateway REST APIs in an AWS account.

Table: aws_api_gateway_rest_api - Query AWS API Gateway Rest APIs using SQL

The AWS API Gateway Rest API is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. These APIs act as the "front door" for applications to access data, business logic, or functionality from your backend services. They can be used to enable real-time two-way communication (WebSocket APIs), or create, deploy, and manage HTTP and REST APIs (RESTful APIs).

Table Usage Guide

The aws_api_gateway_rest_api table in Steampipe provides you with information about API Gateway REST APIs within AWS API Gateway. This table allows you, as a DevOps engineer, to query REST API-specific details, including the API's name, description, id, and created date. You can utilize this table to gather insights on APIs, such as their deployment status, endpoint configurations, and more. The schema outlines the various attributes of the API Gateway REST API for you, including the API's ARN, created date, endpoint configuration, and associated tags.

Examples

API gateway rest API basic info

Explore the basic configuration details of your API Gateway's REST APIs to understand aspects like the source of API keys and compression settings. This can be particularly useful in managing and optimizing your APIs for better performance and security.

select
  name,
  api_id,
  api_key_source,
  minimum_compression_size,
  binary_media_types
from
  aws_api_gateway_rest_api;
select
  name,
  api_id,
  api_key_source,
  minimum_compression_size,
  binary_media_types
from
  aws_api_gateway_rest_api;

List all REST APIs that have content encoding disabled

Determine the areas in which REST APIs do not have content encoding enabled, to identify potential performance improvements.

select
  name,
  api_id,
  api_key_source,
  minimum_compression_size
from
  aws_api_gateway_rest_api
where
  minimum_compression_size is null;
select
  name,
  api_id,
  api_key_source,
  minimum_compression_size
from
  aws_api_gateway_rest_api
where
  minimum_compression_size is null;

List all APIs that are not configured with a private endpoint

Determine the areas in which the APIs are publicly accessible, allowing you to assess potential security risks and implement necessary changes to enhance data protection.

select
  name,
  api_id,
  api_key_source,
  endpoint_configuration_types,
  endpoint_configuration_vpc_endpoint_ids
from
  aws_api_gateway_rest_api
where
  not endpoint_configuration_types ? 'PRIVATE';
select
  name,
  api_id,
  api_key_source,
  endpoint_configuration_types,
  endpoint_configuration_vpc_endpoint_ids
from
  aws_api_gateway_rest_api
where
  not exists (
    select 1
    from json_each(endpoint_configuration_types)
    where value = 'PRIVATE'
  );

List API policy statements that grant external access

Determine the areas in which your API's policy statements are granting access to external entities. This is useful to identify potential security risks and ensure that your API's access control is as intended.

select
  name,
  p as principal,
  a as action,
  s ->> 'Effect' as effect,
  s -> 'Condition' as conditions
from
  aws_api_gateway_rest_api,
  jsonb_array_elements(policy_std -> 'Statement') as s,
  jsonb_array_elements_text(s -> 'Principal' -> 'AWS') as p,
  string_to_array(p, ':') as pa,
  jsonb_array_elements_text(s -> 'Action') as a
where
  s ->> 'Effect' = 'Allow'
  and (
    pa [5] != account_id
    or p = '*'
  );
Error: SQLite does not support the split or string_to_array functions.
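As a workaround, a hedged SQLite approximation can check whether the principal contains the account ID at all, using `instr` in place of array splitting. This is a sketch, not an exact equivalent of the PostgreSQL query, since it does not parse the ARN into segments:

```sql
-- Approximate: flag Allow statements whose principal is '*' or does not mention this account
select
  name,
  principal.value as principal,
  action.value as action,
  json_extract(s.value, '$.Effect') as effect,
  json_extract(s.value, '$.Condition') as conditions
from
  aws_api_gateway_rest_api,
  json_each(json_extract(policy_std, '$.Statement')) as s,
  json_each(json_extract(s.value, '$.Principal.AWS')) as principal,
  json_each(json_extract(s.value, '$.Action')) as action
where
  json_extract(s.value, '$.Effect') = 'Allow'
  and (
    principal.value = '*'
    or instr(principal.value, account_id) = 0
  );
```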

API policy statements that grant anonymous access

Identify instances where API policy statements are granting access to anonymous users. This is crucial for maintaining the security of your API by preventing unauthorized access.

select
  title,
  p as principal,
  a as action,
  s ->> 'Effect' as effect,
  s -> 'Condition' as conditions
from
  aws_api_gateway_rest_api,
  jsonb_array_elements(policy_std -> 'Statement') as s,
  jsonb_array_elements_text(s -> 'Principal' -> 'AWS') as p,
  jsonb_array_elements_text(s -> 'Action') as a
where
  p = '*'
  and s ->> 'Effect' = 'Allow';
select
  title,
  principal.value as principal,
  action.value as action,
  json_extract(s.value, '$.Effect') as effect,
  json_extract(s.value, '$.Condition') as conditions
from
  aws_api_gateway_rest_api,
  json_each(json_extract(policy_std, '$.Statement')) as s,
  json_each(json_extract(s.value, '$.Principal.AWS')) as principal,
  json_each(json_extract(s.value, '$.Action')) as action
where
  principal.value = '*'
  and json_extract(s.value, '$.Effect') = 'Allow';
title description
Steampipe Table: aws_api_gateway_stage - Query AWS API Gateway Stages using SQL
Allows users to query AWS API Gateway Stages for information related to deployment, API, and stage details.

Table: aws_api_gateway_stage - Query AWS API Gateway Stages using SQL

The AWS API Gateway Stages are crucial parts of the API Gateway service that help manage and control the lifecycle of an API. Stages are named references to a specific deployment of an API and associated settings. They enable API call traffic management, throttling, access permissions, and enable or disable API Gateway caching.

Table Usage Guide

The aws_api_gateway_stage table in Steampipe provides you with information about stages within AWS API Gateway. This table allows you, as a DevOps engineer, to query stage-specific details, including the associated deployment, API, stage description, and associated metadata. You can utilize this table to gather insights on stages, such as the stage's deployment ID, the associated API, stage settings, and more. The schema outlines the various attributes of the API Gateway stage for you, including the stage name, deployment ID, API ID, created date, and associated tags.

Examples

Count of stages per rest APIs

Determine the number of stages for each REST API to understand the distribution of stages across your APIs. This can aid in managing complexity, optimizing deployment, and improving API performance.

select
  rest_api_id,
  count(name) stage_count
from
  aws_api_gateway_stage
group by
  rest_api_id;
select
  rest_api_id,
  count(name) as stage_count
from
  aws_api_gateway_stage
group by
  rest_api_id;
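To label each API by name rather than ID, the stage table can be joined to `aws_api_gateway_rest_api`. A hedged sketch (PostgreSQL syntax):

```sql
-- Stage count per API, keyed by API name; left join keeps APIs with no stages
select
  r.name as api_name,
  count(s.name) as stage_count
from
  aws_api_gateway_rest_api as r
  left join aws_api_gateway_stage as s on s.rest_api_id = r.api_id
group by
  r.name;
```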

List of stages where API caching is enabled

Identify the stages in your API Gateway where caching is enabled. This is useful for optimizing performance and reducing latency by avoiding unnecessary calls to the backend.

select
  name,
  rest_api_id,
  cache_cluster_enabled,
  cache_cluster_size
from
  aws_api_gateway_stage
where
  cache_cluster_enabled;
select
  name,
  rest_api_id,
  cache_cluster_enabled,
  cache_cluster_size
from
  aws_api_gateway_stage
where
  cache_cluster_enabled = 1;

List web ACLs associated with the gateway stages

Identify the web access control lists (ACLs) associated with each stage of your API Gateway. This is useful for auditing security settings, ensuring the correct ACLs are in place, and troubleshooting access issues.

select
  name,
  split_part(web_acl_arn, '/', 3) as web_acl_name
from
  aws_api_gateway_stage;
select
  name,
  substr(
    substr(substr(web_acl_arn, instr(web_acl_arn, '/') + 1), instr(substr(web_acl_arn, instr(web_acl_arn, '/') + 1), '/') + 1),
    1,
    instr(substr(substr(web_acl_arn, instr(web_acl_arn, '/') + 1), instr(substr(web_acl_arn, instr(web_acl_arn, '/') + 1), '/') + 1), '/') - 1
  ) as web_acl_name
from
  aws_api_gateway_stage;

List stages with CloudWatch logging disabled

This query identifies the stages in your AWS API Gateway that don't have CloudWatch logging enabled. It's useful for ensuring all stages are properly monitored and adhere to logging best practices, improving your system's security and troubleshooting capabilities.

select
  deployment_id,
  name,
  tracing_enabled,
  method_settings -> '*/*' ->> 'LoggingLevel' as cloudwatch_log_level
from
  aws_api_gateway_stage
where
  method_settings -> '*/*' ->> 'LoggingLevel' = 'OFF';
select
  deployment_id,
  name,
  tracing_enabled,
  json_extract(method_settings, '$."*/*".LoggingLevel') as cloudwatch_log_level
from
  aws_api_gateway_stage
where
  json_extract(method_settings, '$."*/*".LoggingLevel') = 'OFF';
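Similarly, the `tracing_enabled` column selected above can be used on its own to find stages without X-Ray tracing. A hedged sketch (PostgreSQL syntax):

```sql
-- Stages where X-Ray tracing is turned off
select
  name,
  rest_api_id,
  tracing_enabled
from
  aws_api_gateway_stage
where
  not tracing_enabled;
```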
title description
Steampipe Table: aws_api_gateway_usage_plan - Query AWS API Gateway Usage Plans using SQL
Allows users to query AWS API Gateway Usage Plans in order to retrieve information about the usage plans configured in the AWS API Gateway service.

Table: aws_api_gateway_usage_plan - Query AWS API Gateway Usage Plans using SQL

The AWS API Gateway Usage Plans are a feature of Amazon API Gateway that allows developers to manage and restrict the usage of their APIs. These plans can be associated with API keys to enable cost recovery, as well as to control the usage of APIs by third-party developers. This ensures a smooth and controlled distribution of your APIs, protecting them from misuse and overuse.

Table Usage Guide

The aws_api_gateway_usage_plan table in Steampipe provides you with information about usage plans within AWS API Gateway. This table allows you, as a DevOps engineer, to query usage plan specific details, including associated API stages, throttle and quota limits, and associated metadata. You can utilize this table to gather insights on usage plans, such as plans with specific rate limits, the number of requests your clients can make per a given period, and more. The schema outlines the various attributes of the usage plan, including the plan ID, name, description, associated API keys, and associated tags for you.

Examples

Basic info

Explore the various usage plans associated with your AWS API Gateway. This can help you better manage and monitor your API usage, ensuring optimal performance and cost-effectiveness.

select
  name,
  id,
  product_code,
  description,
  api_stages
from
  aws_api_gateway_usage_plan;
select
  name,
  id,
  product_code,
  description,
  api_stages
from
  aws_api_gateway_usage_plan;

List the API gateway usage plans where quota (i.e., the number of API calls a user can make within a time period) is disabled

Identify instances where the API gateway usage plans do not have a set quota, which indicates that there is no limit to the number of API calls a user can make within a certain time period. This might be useful in understanding potential areas of vulnerability or overuse in your system.

select
  name,
  id,
  quota
from
  aws_api_gateway_usage_plan
where
  quota is null;
select
  name,
  id,
  quota
from
  aws_api_gateway_usage_plan
where
  quota is null;

List the API gateway usage plans where throttle (i.e., the rate at which a user can make requests) is disabled

Determine the areas in which the API gateway usage plan lacks a throttle feature, indicating that there are no restrictions on user request rates.

select
  name,
  id,
  throttle
from
  aws_api_gateway_usage_plan
where
  throttle is null;
select
  name,
  id,
  throttle
from
  aws_api_gateway_usage_plan
where
  throttle is null;
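Where throttling and quotas are configured, their limits can be unpacked from the JSON columns. A hedged sketch (PostgreSQL syntax); the `BurstLimit`, `RateLimit`, `Limit`, and `Period` keys are assumptions based on the API Gateway throttle and quota settings:

```sql
-- Extract rate and quota limits from configured usage plans
select
  name,
  id,
  throttle ->> 'BurstLimit' as burst_limit,
  throttle ->> 'RateLimit' as rate_limit,
  quota ->> 'Limit' as quota_limit,
  quota ->> 'Period' as quota_period
from
  aws_api_gateway_usage_plan
where
  throttle is not null
  or quota is not null;
```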
title description
Steampipe Table: aws_api_gatewayv2_api - Query AWS API Gateway using SQL
Allows users to query API Gateway APIs and retrieve detailed information about each API, including its ID, name, protocol type, and more.

Table: aws_api_gatewayv2_api - Query AWS API Gateway using SQL

The AWS API Gateway is a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale. It handles all the tasks involved in accepting and processing up to hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API version management. With the use of SQL, you can query and manage your API Gateway effectively.

Table Usage Guide

The aws_api_gatewayv2_api table in Steampipe provides you with information about APIs within AWS API Gateway. This table allows you, as a DevOps engineer, to query API-specific details, including the API ID, name, protocol type, route selection expression, and associated tags. You can utilize this table to gather insights on APIs, such as their configuration details, associated resources, and more. The schema outlines the various attributes of the API for you, including the API key selection expression, CORS configuration, created date, and description.

Examples

Basic info

Explore the essential details of your AWS API Gateway configurations, including the protocol type, endpoint, and how routes and API keys are selected. This can aid in optimizing your API setup and troubleshooting or improving API performance.

select
  name,
  api_id,
  api_endpoint,
  protocol_type,
  api_key_selection_expression,
  route_selection_expression
from
  aws_api_gatewayv2_api;
select
  name,
  api_id,
  api_endpoint,
  protocol_type,
  api_key_selection_expression,
  route_selection_expression
from
  aws_api_gatewayv2_api;

List APIs with protocol type WEBSOCKET

Identify APIs that use the WebSocket protocol. This helps you understand which APIs are designed for real-time, two-way interactive communication and may need specific handling or monitoring due to their protocol type.

select
  name,
  api_id,
  protocol_type
from
  aws_api_gatewayv2_api
where
  protocol_type = 'WEBSOCKET';
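
Beyond filtering, the same column supports aggregation. As a sketch in standard SQL (valid in both PostgreSQL and SQLite), you could count APIs by protocol type to see the mix of HTTP and WebSocket APIs in your account:

```sql
-- Count APIs by protocol type (e.g. HTTP vs WEBSOCKET)
select
  protocol_type,
  count(*) as api_count
from
  aws_api_gatewayv2_api
group by
  protocol_type;
```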

List APIs with default endpoint enabled

Identify APIs where the default execute-api endpoint is still enabled. Disabling unused default endpoints reduces the attack surface and helps enforce endpoint configuration best practices.

PostgreSQL:

select
  name,
  api_id,
  api_endpoint
from
  aws_api_gatewayv2_api
where
  not disable_execute_api_endpoint;

SQLite:

select
  name,
  api_id,
  api_endpoint
from
  aws_api_gatewayv2_api
where
  disable_execute_api_endpoint = 0;
title description
Steampipe Table: aws_api_gatewayv2_domain_name - Query AWS API Gateway Domain Names using SQL
Allows users to query AWS API Gateway Domain Names and provides information about each domain name within the AWS API Gateway Service. This table can be used to query domain name details, including associated API mappings, security policy, and associated tags.

Table: aws_api_gatewayv2_domain_name - Query AWS API Gateway Domain Names using SQL

The AWS API Gateway Domain Name is a component of Amazon API Gateway that you associate with a DNS hostname. It's utilized to provide a custom domain for an API that you deploy through the service. The custom domain name can be used to route requests to the API, providing a more user-friendly URL for your API endpoints.

Table Usage Guide

The aws_api_gatewayv2_domain_name table in Steampipe provides you with information about each domain name within the AWS API Gateway Service. This table allows you to query domain name details, including associated API mappings, security policy, and associated tags. The schema outlines the various attributes of the domain name for you, including the domain name ARN, domain name, endpoint type, and associated tags.

Examples

Basic info

Explore the security settings and metadata of your AWS API Gateway domain names, including mutual TLS authentication status, tags, title, and alternative names. This is useful for maintaining secure and well-organized API management.

select
  domain_name,
  mutual_tls_authentication,
  tags,
  title,
  akas
from
  aws_api_gatewayv2_domain_name;

List all edge endpoint type domain names

Identify domain names in AWS API Gateway that use the EDGE endpoint type. This is useful for understanding and managing edge-optimized API configurations.

PostgreSQL:

select
  domain_name,
  config ->> 'EndpointType' as endpoint_type
from
  aws_api_gatewayv2_domain_name
  cross join jsonb_array_elements(domain_name_configurations) as config
where
  config ->> 'EndpointType' = 'EDGE';

SQLite:

select
  domain_name,
  json_extract(config.value, '$.EndpointType') as endpoint_type
from
  aws_api_gatewayv2_domain_name,
  json_each(domain_name_configurations) as config
where
  json_extract(config.value, '$.EndpointType') = 'EDGE';

API gatewayv2 domain name configuration info

Examine the configuration details of your API Gateway domain names, including endpoint type, certificate details, security policy, and status. This information is useful for troubleshooting issues and assessing the security posture of your API Gateway.

PostgreSQL:

select
  domain_name,
  config ->> 'EndpointType' as endpoint_type,
  config ->> 'CertificateName' as certificate_name,
  config ->> 'CertificateArn' as certificate_arn,
  config ->> 'CertificateUploadDate' as certificate_upload_date,
  config ->> 'DomainNameStatus' as domain_name_status,
  config ->> 'DomainNameStatusMessage' as domain_name_status_message,
  config ->> 'ApiGatewayDomainName' as api_gateway_domain_name,
  config ->> 'HostedZoneId' as hosted_zone_id,
  config ->> 'OwnershipVerificationCertificateArn' as ownership_verification_certificate_arn,
  config -> 'SecurityPolicy' as security_policy
from
  aws_api_gatewayv2_domain_name
  cross join jsonb_array_elements(domain_name_configurations) as config;

SQLite:

select
  domain_name,
  json_extract(config.value, '$.EndpointType') as endpoint_type,
  json_extract(config.value, '$.CertificateName') as certificate_name,
  json_extract(config.value, '$.CertificateArn') as certificate_arn,
  json_extract(config.value, '$.CertificateUploadDate') as certificate_upload_date,
  json_extract(config.value, '$.DomainNameStatus') as domain_name_status,
  json_extract(config.value, '$.DomainNameStatusMessage') as domain_name_status_message,
  json_extract(config.value, '$.ApiGatewayDomainName') as api_gateway_domain_name,
  json_extract(config.value, '$.HostedZoneId') as hosted_zone_id,
  json_extract(config.value, '$.OwnershipVerificationCertificateArn') as ownership_verification_certificate_arn,
  json_extract(config.value, '$.SecurityPolicy') as security_policy
from
  aws_api_gatewayv2_domain_name,
  json_each(domain_name_configurations) as config;

Get mutual TLS authentication configuration of each domain name

Review the mutual TLS authentication configuration for each domain name, including the truststore URI, version, and any warnings. This helps you verify that domains are properly secured and identify those that may require attention.

PostgreSQL:

select
  domain_name,
  mutual_tls_authentication ->> 'TruststoreUri' as truststore_uri,
  mutual_tls_authentication ->> 'TruststoreVersion' as truststore_version,
  mutual_tls_authentication ->> 'TruststoreWarnings' as truststore_warnings
from
  aws_api_gatewayv2_domain_name;

SQLite:

select
  domain_name,
  json_extract(mutual_tls_authentication, '$.TruststoreUri') as truststore_uri,
  json_extract(mutual_tls_authentication, '$.TruststoreVersion') as truststore_version,
  json_extract(mutual_tls_authentication, '$.TruststoreWarnings') as truststore_warnings
from
  aws_api_gatewayv2_domain_name;
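
Conversely, domains with no mutual TLS configuration at all may warrant review. A minimal sketch, assuming the mutual_tls_authentication column is null when the feature is not configured:

```sql
-- Find custom domains without mutual TLS authentication configured
select
  domain_name
from
  aws_api_gatewayv2_domain_name
where
  mutual_tls_authentication is null;
```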

Get certificate details of each domain name

Examine the certificates associated with each domain name, including the issuing authority, creation and issuance dates, key algorithm, and transparency logging preference. This helps in managing the security posture of your custom domains.

PostgreSQL:

select
  d.domain_name,
  config ->> 'CertificateArn' as certificate_arn,
  c.certificate,
  c.certificate_transparency_logging_preference,
  c.created_at,
  c.imported_at,
  c.issuer,
  c.issued_at,
  c.key_algorithm
from
  aws_api_gatewayv2_domain_name AS d
  cross join jsonb_array_elements(d.domain_name_configurations) AS config
  left join aws_acm_certificate AS c ON c.certificate_arn = config ->> 'CertificateArn';

SQLite:

select
  d.domain_name,
  json_extract(config.value, '$.CertificateArn') as certificate_arn,
  c.certificate,
  c.certificate_transparency_logging_preference,
  c.created_at,
  c.imported_at,
  c.issuer,
  c.issued_at,
  c.key_algorithm
from
  aws_api_gatewayv2_domain_name AS d,
  json_each(d.domain_name_configurations) AS config
  left join aws_acm_certificate AS c ON c.certificate_arn = json_extract(config.value, '$.CertificateArn');
title description
Steampipe Table: aws_api_gatewayv2_integration - Query AWS API Gateway Integrations using SQL
Allows users to query AWS API Gateway Integrations to retrieve detailed information about each integration within the API Gateway.

Table: aws_api_gatewayv2_integration - Query AWS API Gateway Integrations using SQL

The AWS API Gateway Integrations is a feature within the Amazon API Gateway service that allows you to integrate backend operations such as Lambda functions, HTTP endpoints, and other AWS services into your API. These integrations enable your API to interact with these services, processing incoming requests and returning responses to the client. This functionality aids in creating efficient, scalable, and secure APIs.

Table Usage Guide

The aws_api_gatewayv2_integration table in Steampipe provides you with information about each integration within AWS API Gateway. This table allows you as a DevOps engineer to query integration-specific details, including the integration type, API Gateway ID, integration method, and more. You can utilize this table to gather insights on integrations, such as integration protocols, request templates, and connection type. The schema outlines the various attributes of the integration for you, including the integration ID, integration response selection expression, integration subtype, and associated tags.

Examples

Basic info

Review the integrations configured in your AWS API Gateway, including their type, target URI, and description. This helps you understand how your APIs connect to backend services and manage their configuration effectively.

select
  integration_id,
  api_id,
  integration_type,
  integration_uri,
  description
from
  aws_api_gatewayv2_integration;

Count of integrations per API

Analyze the distribution of integrations across your APIs to identify heavily used APIs, which may indicate areas of complexity or candidates for performance optimization and resource planning.

select 
  api_id,
  count(integration_id) as integration_count
from 
  aws_api_gatewayv2_integration
group by
  api_id;
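
To make the counts easier to read, the integration table can be joined back to aws_api_gatewayv2_api on api_id. A sketch in PostgreSQL syntax, assuming both tables are available in the same connection:

```sql
-- Integration count per API, with the API's name
select
  a.name,
  a.api_id,
  count(i.integration_id) as integration_count
from
  aws_api_gatewayv2_api as a
  left join aws_api_gatewayv2_integration as i on i.api_id = a.api_id
group by
  a.name,
  a.api_id;
```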
title description
Steampipe Table: aws_api_gatewayv2_route - Query AWS API Gateway V2 Routes using SQL
Allows users to query AWS API Gateway V2 Routes and obtain detailed information about each route, including the route key, route response selection expression, and target.

Table: aws_api_gatewayv2_route - Query AWS API Gateway V2 Routes using SQL

The AWS API Gateway V2 Routes is a feature within the Amazon API Gateway service. It allows you to define the paths that a client application can take to access your API. This feature is integral to the process of creating, deploying, and managing your APIs in a secure and scalable manner.

Table Usage Guide

The aws_api_gatewayv2_route table in Steampipe provides you with information about routes within AWS API Gateway V2. This table allows you, as a DevOps engineer, to query route-specific details, including the route key, route response selection expression, and target. You can utilize this table to gather insights on routes, such as route configurations, route response behaviors, and more. The schema outlines the various attributes of the route for you, including the API identifier, route ID, route key, and associated metadata.

Examples

Basic info

Determine the areas in which your AWS API Gateway is managed and if an API key is required. This can help in identifying potential security risks and ensuring appropriate access controls are in place.

select
  route_key,
  api_id,
  route_id,
  api_gateway_managed,
  api_key_required
from
  aws_api_gatewayv2_route;

List routes by API

Explore which routes are associated with a specific API to better manage and optimize your API Gateway. This can be particularly useful for troubleshooting or for identifying opportunities for API performance enhancement.

select
  route_key,
  api_id,
  route_id
from
  aws_api_gatewayv2_route
where
  api_id = 'w5n71b2m85';
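
The api_key_required column shown in the basic info example can also serve as a filter. A sketch to surface routes that enforce an API key (boolean syntax shown for PostgreSQL; use api_key_required = 1 in SQLite):

```sql
-- List routes that require an API key
select
  route_key,
  api_id,
  route_id
from
  aws_api_gatewayv2_route
where
  api_key_required;
```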

List routes of APIs with the default endpoint enabled

Identify the instances where the default endpoint is enabled in APIs, allowing you to understand and manage the routes that are directly accessible.

PostgreSQL:

select
  r.route_id,
  a.name,
  a.api_id,
  a.api_endpoint
from
  aws_api_gatewayv2_route as r,
  aws_api_gatewayv2_api as a
where
  r.api_id = a.api_id
  and not a.disable_execute_api_endpoint;

SQLite:

select
  r.route_id,
  a.name,
  a.api_id,
  a.api_endpoint
from
  aws_api_gatewayv2_route as r,
  aws_api_gatewayv2_api as a
where
  r.api_id = a.api_id
  and a.disable_execute_api_endpoint = 0;
title description
Steampipe Table: aws_api_gatewayv2_stage - Query AWS API Gateway Stages using SQL
Allows users to query AWS API Gateway Stages, providing detailed information about each stage of the API Gateway.

Table: aws_api_gatewayv2_stage - Query AWS API Gateway Stages using SQL

The AWS API Gateway Stage is a crucial component within the AWS API Gateway service. It represents a phase in the lifecycle of an API (like development, production, or beta) that an application developer interacts with. Stages are accompanied by a stage name, deployment identifier, and a description, and they allow for the routing of incoming API calls to various backend endpoints.

Table Usage Guide

The aws_api_gatewayv2_stage table in Steampipe provides you with information about stages within AWS API Gateway. This table allows you, as a DevOps engineer, to query stage-specific details, including default route settings, deployment ID, description, and associated metadata. You can utilize this table to gather insights on stages, such as the last updated time of the stage, stage variables, auto deployment details, and more. The schema outlines for you the various attributes of the API Gateway stage, including the stage name, API ID, created date, and associated tags.

Examples

List API gateway V2 stages that do not send logs to CloudWatch Logs

Identify API Gateway stages where default route data trace logging is disabled, so execution logs are not sent to CloudWatch Logs. Enabling it can help in troubleshooting and analyzing API performance.

PostgreSQL:

select
  stage_name,
  api_id,
  default_route_data_trace_enabled
from
  aws_api_gatewayv2_stage
where
  not default_route_data_trace_enabled;

SQLite:

select
  stage_name,
  api_id,
  default_route_data_trace_enabled
from
  aws_api_gatewayv2_stage
where
  default_route_data_trace_enabled = 0;

Default route settings info of each API gateway V2 stage

Explore the default settings of each stage in your API gateway to understand how data tracing, detailed metrics, and throttling limits are configured. This helps in managing your API effectively by fine-tuning these settings as per your requirements.

select
  stage_name,
  api_id,
  default_route_data_trace_enabled,
  default_route_detailed_metrics_enabled,
  default_route_throttling_burst_limit,
  default_route_throttling_rate_limit
from
  aws_api_gatewayv2_stage;

Count of API gateway V2 stages by APIs

Determine the quantity of stages each API Gateway has, which can be useful for understanding the complexity and scale of each individual API.

select
  api_id,
  count(stage_name) as stage_count
from
  aws_api_gatewayv2_stage
group by
  api_id;

Get access log settings of API gateway V2 stages

Discover the configuration settings of different stages in API gateway V2 to better understand and manage access logs and data tracing. This can be useful for enhancing security and troubleshooting issues.

PostgreSQL:

select
  stage_name,
  api_id,
  default_route_data_trace_enabled,
  jsonb_pretty(access_log_settings) as access_log_settings
from
  aws_api_gatewayv2_stage;

SQLite:

select
  stage_name,
  api_id,
  default_route_data_trace_enabled,
  access_log_settings
from
  aws_api_gatewayv2_stage;
title description
Steampipe Table: aws_appautoscaling_policy - Query AWS Application Auto Scaling Policies using SQL
Allows users to query AWS Application Auto Scaling Policies to obtain information about their configuration, attached resources, and other metadata.

Table: aws_appautoscaling_policy - Query AWS Application Auto Scaling Policies using SQL

The AWS Application Auto Scaling Policies allow you to manage the scaling of your applications in response to their demand patterns. They enable automatic adjustments to the scalable target capacity as needed to maintain optimal resource utilization. This ensures that your applications always have the right resources at the right time, improving their performance and reducing costs.

Table Usage Guide

The aws_appautoscaling_policy table in Steampipe provides you with information about Application Auto Scaling policies in AWS. This table allows you, as a DevOps engineer, system administrator, or other technical professional, to query policy-specific details, including the scaling target, scaling dimensions, and associated metadata. You can utilize this table to gather insights on policies, such as policy configurations, attached resources, scaling activities, and more. The schema outlines the various attributes of the Application Auto Scaling policy, including the policy ARN, policy type, creation time, and associated tags for you.

Examples

Basic info

Analyze the settings to understand the policies associated with your AWS ECS service. This can help in managing the scaling behavior of the resources in the ECS service, identifying the dimensions that are scalable, and the time of policy creation.

select
  service_namespace,
  scalable_dimension,
  policy_type,
  resource_id,
  creation_time
from
  aws_appautoscaling_policy
where
  service_namespace = 'ecs';

List policies for ECS services with policy type Step scaling

Determine the areas in which step scaling policies are applied for ECS services. This can help in managing and optimizing resource allocation for your applications.

select
  resource_id,
  policy_type
from
  aws_appautoscaling_policy
where
  service_namespace = 'ecs'
  and policy_type = 'StepScaling';
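
Target tracking is the other common policy type. Assuming the AWS-documented value 'TargetTrackingScaling', a parallel query lists those policies:

```sql
-- List ECS target tracking scaling policies
select
  resource_id,
  policy_type
from
  aws_appautoscaling_policy
where
  service_namespace = 'ecs'
  and policy_type = 'TargetTrackingScaling';
```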

List policies for ECS services created in the last 30 days

Identify recent policy changes for your ECS services. This query is useful for monitoring and managing your autoscaling configuration, allowing you to track changes made within the last month.

PostgreSQL:

select
  resource_id,
  policy_type
from
  aws_appautoscaling_policy
where
  service_namespace = 'ecs'
  and creation_time > now() - interval '30 days';

SQLite:

select
  resource_id,
  policy_type
from
  aws_appautoscaling_policy
where
  service_namespace = 'ecs'
  and creation_time > datetime('now','-30 days');

Get the CloudWatch alarms associated with the Auto Scaling policy

Determine the areas in which CloudWatch alarms are linked to an Auto Scaling policy. This can be beneficial in understanding the alarm triggers and managing resources within the Elastic Container Service (ECS).

PostgreSQL:

select
  resource_id,
  policy_type,
  jsonb_array_elements(alarms) -> 'AlarmName' as alarm_name
from
  aws_appautoscaling_policy
where
  service_namespace = 'ecs';

SQLite:

select
  resource_id,
  policy_type,
  json_extract(json_each.value, '$.AlarmName') as alarm_name
from
  aws_appautoscaling_policy,
  json_each(alarms)
where
  service_namespace = 'ecs';

Get the configuration for Step scaling type policies

Explore the setup of step scaling policies within the ECS service namespace to understand how application auto scaling is configured.

select
  resource_id,
  policy_type,
  step_scaling_policy_configuration
from
  aws_appautoscaling_policy
where
  service_namespace = 'ecs'
  and policy_type = 'StepScaling';
title description
Steampipe Table: aws_appautoscaling_target - Query AWS Application Auto Scaling Targets using SQL
Allows users to query AWS Application Auto Scaling Targets. This table provides information about each target, including the service namespace, scalable dimension, resource ID, and the associated scaling policies.

Table: aws_appautoscaling_target - Query AWS Application Auto Scaling Targets using SQL

The AWS Application Auto Scaling Targets are used to manage scalable targets within AWS services. These targets can be any resource that can scale in or out, such as an Amazon ECS service, an Amazon EC2 Spot Fleet request, or an Amazon RDS read replica. The Application Auto Scaling service automatically adjusts the resource's capacity to maintain steady, predictable performance at the lowest possible cost.

Table Usage Guide

The aws_appautoscaling_target table in Steampipe provides you with information about each target within AWS Application Auto Scaling. This table allows you, as a DevOps engineer, to query target-specific details, including the service namespace, scalable dimension, resource ID, and the associated scaling policies. You can utilize this table to gather insights on scaling targets, such as the min and max capacity, role ARN, and more. The schema outlines the various attributes of the scaling target for you, including the resource ID, scalable dimension, creation time, and associated tags.

Examples

Basic info

Explore the creation timeline of resources within the AWS DynamoDB service, which can help in understanding their scalability dimensions and facilitate efficient resource management.

select
  service_namespace,
  scalable_dimension,
  resource_id,
  creation_time
from
  aws_appautoscaling_target
where
  service_namespace = 'dynamodb';

List targets for DynamoDB tables with read or write auto scaling enabled

Determine the areas in which auto-scaling is enabled for read or write operations in DynamoDB tables. This is useful in managing resources efficiently and optimizing cost by ensuring that scaling only occurs when necessary.

select
  resource_id,
  scalable_dimension
from
  aws_appautoscaling_target
where
  service_namespace = 'dynamodb'
  and (scalable_dimension = 'dynamodb:table:ReadCapacityUnits'
    or scalable_dimension = 'dynamodb:table:WriteCapacityUnits');
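
Targets and policies share the resource_id and service_namespace columns, so the two tables can be joined to see which scalable targets actually have a policy attached. A sketch in PostgreSQL syntax, using a left join so targets without policies still appear:

```sql
-- Scalable DynamoDB targets and any attached scaling policies
select
  t.resource_id,
  t.scalable_dimension,
  p.policy_type
from
  aws_appautoscaling_target as t
  left join aws_appautoscaling_policy as p
    on p.resource_id = t.resource_id
    and p.service_namespace = t.service_namespace
where
  t.service_namespace = 'dynamodb';
```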
title description
Steampipe Table: aws_appconfig_application - Query AWS AppConfig Applications using SQL
Allows users to query AWS AppConfig Applications to gather detailed information about each application, including its name, description, associated environments, and more.

Table: aws_appconfig_application - Query AWS AppConfig Applications using SQL

The AWS AppConfig Application is a feature of AWS AppConfig, which is a service that enables you to create, manage, and quickly deploy application configurations. It is designed to use AWS Lambda, Amazon ECS, Amazon S3, and other AWS services. AWS AppConfig Application helps you manage the configurations of your Amazon Web Services applications in a centralized manner, reducing error and increasing speed in deployment.

Table Usage Guide

The aws_appconfig_application table in Steampipe provides you with information about AWS AppConfig Applications. This table allows you, as a DevOps engineer or other technical professional, to query application-specific details, including its ID, name, description, and associated environments. You can utilize this table to gather insights on applications, such as their deployment strategies, associated configurations, and more. The schema outlines the various attributes of the AppConfig application for you, including the application ID, name, description, and associated tags.

Examples

Basic info

Explore which AWS AppConfig applications are currently in use. This can help you manage and monitor your applications effectively, ensuring they're configured correctly and align with your operational requirements.

select
  arn,
  id,
  name,
  description,
  tags
from
  aws_appconfig_application;
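
Because tags is a JSON column, individual tag keys can be extracted for filtering. A sketch in PostgreSQL syntax, using a hypothetical 'environment' tag key:

```sql
-- List applications tagged environment = production ('environment' is a hypothetical tag key)
select
  id,
  name,
  tags ->> 'environment' as environment
from
  aws_appconfig_application
where
  tags ->> 'environment' = 'production';
```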
title description
Steampipe Table: aws_appstream_fleet - Query AWS AppStream Fleet using SQL
Allows users to query AWS AppStream Fleets for detailed information about each fleet, including its state, instance type, and associated stack details.

Table: aws_appstream_fleet - Query AWS AppStream Fleet using SQL

The AWS AppStream Fleet is a part of Amazon AppStream 2.0, a fully managed, secure application streaming service that allows you to stream desktop applications from AWS to any device running a web browser. It provides users instant-on access to the applications they need, and a responsive, fluid user experience on the device of their choice. An AppStream Fleet consists of streaming instances that run the image builder to stream applications to users.

Table Usage Guide

The aws_appstream_fleet table in Steampipe provides you with information about fleets within AWS AppStream. This table allows you, as a DevOps engineer, to query fleet-specific details, including the fleet state, instance type, associated stack details, and more. You can utilize this table to gather insights on fleets, such as the fleet's current capacity, the fleet's idle disconnect timeout settings, and the fleet's stream view. The schema outlines the various attributes of the AppStream Fleet for you, including the fleet ARN, creation time, fleet type, and associated tags.

Examples

Basic info

Explore the characteristics of your AWS AppStream fleet, such as its creation time, state, and whether default internet access is enabled. This can help you understand the configuration and status of your fleet for better resource management.

select
  name,
  arn,
  instance_type,
  description,
  created_time,
  display_name,
  state,
  directory_name,
  enable_default_internet_access
from
  aws_appstream_fleet;

List fleets that have default internet access enabled

Determine the fleets that have their default internet access enabled. This is beneficial for assessing which fleets are potentially exposed to internet-based threats, thereby assisting in risk management and security planning.

PostgreSQL:

select
  name,
  arn,
  instance_type,
  description,
  created_time,
  display_name,
  state,
  enable_default_internet_access
from
  aws_appstream_fleet
where enable_default_internet_access;

SQLite:

select
  name,
  arn,
  instance_type,
  description,
  created_time,
  display_name,
  state,
  enable_default_internet_access
from
  aws_appstream_fleet
where enable_default_internet_access = 1;

List on-demand fleets

Identify instances where on-demand fleets in AWS AppStream are being used, allowing users to understand the scope and details of their on-demand resource utilization. This information can be valuable for cost management and resource allocation strategies.

select
  name,
  created_time,
  fleet_type,
  instance_type,
  display_name,
  image_arn,
  image_name
from
  aws_appstream_fleet
where
  fleet_type = 'ON_DEMAND';

List fleets that were created in the last 30 days

Discover the fleets that have been created within the last month, along with their internet access status, maximum concurrent sessions, and user duration limits. This can be beneficial for reviewing recent changes or additions to your fleet configurations.

PostgreSQL:

select
  name,
  created_time,
  display_name,
  enable_default_internet_access,
  max_concurrent_sessions,
  max_user_duration_in_seconds
from
  aws_appstream_fleet
where
  created_time >= now() - interval '30' day;

SQLite:

select
  name,
  created_time,
  display_name,
  enable_default_internet_access,
  max_concurrent_sessions,
  max_user_duration_in_seconds
from
  aws_appstream_fleet
where
  created_time >= datetime('now','-30 day');

List fleets that are using private images

Explore which fleets are utilizing private images, allowing you to assess the level of privacy and security in your AWS AppStream fleets. This can be particularly useful in managing resource allocation and ensuring compliance with internal policies regarding data privacy.

select
  f.name,
  f.created_time,
  f.display_name,
  f.image_arn,
  i.base_image_arn,
  i.image_builder_name,
  i.visibility
from
  aws_appstream_fleet as f,
  aws_appstream_image as i
where
  i.arn = f.image_arn
and
  i.visibility = 'PRIVATE';

Get compute capacity status of each fleet

Assess the elements within each fleet in terms of compute capacity to ensure efficient resource management and optimal performance. This can help in identifying any discrepancies between desired and actual usage, thereby aiding in capacity planning and optimization.

PostgreSQL:

select
  name,
  arn,
  compute_capacity_status ->> 'Available' as available,
  compute_capacity_status ->> 'Desired' as desired,
  compute_capacity_status ->> 'InUse' as in_use,
  compute_capacity_status ->> 'Running' as running
from
  aws_appstream_fleet;

SQLite:

select
  name,
  arn,
  json_extract(compute_capacity_status, '$.Available') as available,
  json_extract(compute_capacity_status, '$.Desired') as desired,
  json_extract(compute_capacity_status, '$.InUse') as in_use,
  json_extract(compute_capacity_status, '$.Running') as running
from
  aws_appstream_fleet;

Get fleet error details

Identify fleets that have reported errors by examining the associated error codes and messages. This can assist in troubleshooting and rectifying provisioning issues promptly.

select
  name,
  arn,
  e ->> 'ErrorCode' as error_code,
  e ->> 'ErrorMessage' as error_message
from
  aws_appstream_fleet,
  jsonb_array_elements(fleet_errors) as e;
select
  name,
  arn,
  json_extract(e.value, '$.ErrorCode') as error_code,
  json_extract(e.value, '$.ErrorMessage') as error_message
from
  aws_appstream_fleet,
  json_each(fleet_errors) as e;
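When a fleet accumulates many errors, the per-message rows above can be noisy. A hedged aggregation (PostgreSQL syntax, reusing the same `fleet_errors` expansion) surfaces the fleets with the most recorded errors instead:

```sql
select
  name,
  count(*) as error_count
from
  aws_appstream_fleet,
  jsonb_array_elements(fleet_errors) as e
group by
  name
order by
  error_count desc;
```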

Get VPC config details of each fleet

Analyze the settings to understand the configuration details of each fleet in your AWS Appstream service. This can help in managing network access and security for your fleets by identifying their associated security groups and subnets.

select
  name,
  arn,
  vpc_config -> 'SecurityGroupIds' as security_group_ids,
  vpc_config -> 'SubnetIds' as subnet_ids
from
  aws_appstream_fleet;
select
  name,
  arn,
  json_extract(vpc_config, '$.SecurityGroupIds') as security_group_ids,
  json_extract(vpc_config, '$.SubnetIds') as subnet_ids
from
  aws_appstream_fleet;
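The queries above return `SubnetIds` as a raw JSON array. If you need one row per subnet instead, the PostgreSQL variant can unnest the array; this is a sketch using the same `vpc_config` column:

```sql
select
  name,
  subnet_id
from
  aws_appstream_fleet,
  jsonb_array_elements_text(vpc_config -> 'SubnetIds') as subnet_id;
```

This shape is handy for joining fleets against subnet or VPC tables on a per-subnet basis.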

Count fleets by instance type

Identify the variety of fleets based on their instance type within your AWS AppStream service. This can help optimize resource allocation by showing where the most and least populated instance types are.

select
  instance_type,
  count(instance_type) as number_of_fleets
from
  aws_appstream_fleet
group by
  instance_type;

List fleets that are in running state

Explore which fleets are currently active and operational. This is useful for monitoring the status of your resources and ensuring they are functioning as expected.

select
  name,
  arn,
  state,
  created_time,
  description
from
  aws_appstream_fleet
where
  state = 'RUNNING';
title description
Steampipe Table: aws_appstream_image - Query AWS AppStream Images using SQL
Allows users to query AWS AppStream Images to gain insights into their properties, states, and associated metadata.

Table: aws_appstream_image - Query AWS AppStream Images using SQL

AWS AppStream Images are part of Amazon AppStream 2.0, a fully managed, secure application streaming service that allows you to stream desktop applications from AWS to any device running a web browser. These images act as templates for the creation of streaming instances, containing all the necessary applications, drivers, and settings. Administrators can create, maintain, and use these images to provide a consistent user experience, regardless of the device being used.

Table Usage Guide

The aws_appstream_image table in Steampipe provides you with information about images within AWS AppStream. This table allows you as a DevOps engineer to query image-specific details, including the image's name, ARN, state, platform, and associated metadata. You can utilize this table to gather insights on images, such as their visibility, status, and the applications they are associated with. The schema outlines the various attributes of the AppStream Image for you, including the image ARN, creation time, visibility status, and associated tags.

Examples

Basic info

Explore the details of your AWS AppStream images to understand their configuration and attributes. This can be beneficial in managing your resources and ensuring they are optimally configured.

select
  name,
  arn,
  base_image_arn,
  description,
  created_time,
  display_name,
  image_builder_name,
  tags
from
  aws_appstream_image;

List available images

Determine the areas in which AWS AppStream images are available for use. This is useful for understanding what resources are currently usable in your environment.

select
  name,
  arn,
  display_name,
  platform,
  state
from
  aws_appstream_image
where
  state = 'AVAILABLE';

List Windows based images

Identify instances where Windows based images are used within the AWS Appstream service. This is beneficial for auditing purposes, ensuring the correct platform is being utilized.

select
  name,
  created_time,
  base_image_arn,
  display_name,
  image_builder_supported,
  image_builder_name
from
  aws_appstream_image
where
  platform = 'WINDOWS';

List images that support image builder

Identify the AWS AppStream images that are compatible with the image builder feature. This is useful to ensure your applications are using images that support this functionality for streamlined image creation and management.

select
  name,
  created_time,
  base_image_arn,
  display_name,
  image_builder_supported,
  image_builder_name
from
  aws_appstream_image
where
  image_builder_supported;
select
  name,
  created_time,
  base_image_arn,
  display_name,
  image_builder_supported,
  image_builder_name
from
  aws_appstream_image
where
  image_builder_supported = 1;

List private images

Explore which AppStream images are set to private to manage access and ensure security. This can help identify instances where images may need to be shared or restricted further.

select
  name,
  created_time,
  base_image_arn,
  display_name,
  image_builder_name,
  visibility
from
  aws_appstream_image
where
  visibility = 'PRIVATE';

Get application details of images

Explore the various attributes of applications within images, such as creation time, display name, and platform compatibility. This can be useful to understand the application's configuration and behavior for effective management and troubleshooting.

select
  name,
  arn,
  a ->> 'AppBlockArn' as app_block_arn,
  a ->> 'Arn' as app_arn,
  a ->> 'CreatedTime' as app_created_time,
  a ->> 'Description' as app_description,
  a ->> 'DisplayName' as app_display_name,
  a ->> 'Enabled' as app_enabled,
  a ->> 'IconS3Location' as app_icon_s3_location,
  a ->> 'IconURL' as app_icon_url,
  a ->> 'InstanceFamilies' as app_instance_families,
  a ->> 'LaunchParameters' as app_launch_parameters,
  a ->> 'LaunchPath' as app_launch_path,
  a ->> 'Name' as app_name,
  a ->> 'Platforms' as app_platforms,
  a ->> 'WorkingDirectory' as app_working_directory
from
  aws_appstream_image,
  jsonb_array_elements(applications) as a;
select
  name,
  arn,
  json_extract(a.value, '$.AppBlockArn') as app_block_arn,
  json_extract(a.value, '$.Arn') as app_arn,
  json_extract(a.value, '$.CreatedTime') as app_created_time,
  json_extract(a.value, '$.Description') as app_description,
  json_extract(a.value, '$.DisplayName') as app_display_name,
  json_extract(a.value, '$.Enabled') as app_enabled,
  json_extract(a.value, '$.IconS3Location') as app_icon_s3_location,
  json_extract(a.value, '$.IconURL') as app_icon_url,
  json_extract(a.value, '$.InstanceFamilies') as app_instance_families,
  json_extract(a.value, '$.LaunchParameters') as app_launch_parameters,
  json_extract(a.value, '$.LaunchPath') as app_launch_path,
  json_extract(a.value, '$.Name') as app_name,
  json_extract(a.value, '$.Platforms') as app_platforms,
  json_extract(a.value, '$.WorkingDirectory') as app_working_directory
from
  aws_appstream_image,
  json_each(applications) as a;

Get the permission model of the images

Determine the access permissions of specific images within your AWS AppStream service. This query is useful if you want to understand which images are accessible by your fleet and image builder, providing insights into your resource utilization and access control.

select
  name,
  arn,
  image_permissions ->> 'AllowFleet' as allow_fleet,
  image_permissions ->> 'AllowImageBuilder' as allow_image_builder
from
  aws_appstream_image;
select
  name,
  arn,
  json_extract(image_permissions, '$.AllowFleet') as allow_fleet,
  json_extract(image_permissions, '$.AllowImageBuilder') as allow_image_builder
from
  aws_appstream_image;

Get error details of failed images

Discover the segments that contain failed images within your AWS AppStream environment. This query can be used to identify and analyze the issues causing image failures, helping to improve the efficiency and reliability of your AppStream services.

select
  name,
  arn,
  e ->> 'ErrorCode' as error_code,
  e ->> 'ErrorMessage' as error_message,
  e ->> 'ErrorTimestamp' as error_timestamp
from
  aws_appstream_image,
  jsonb_array_elements(image_errors) as e;
select
  name,
  arn,
  json_extract(e.value, '$.ErrorCode') as error_code,
  json_extract(e.value, '$.ErrorMessage') as error_message,
  json_extract(e.value, '$.ErrorTimestamp') as error_timestamp
from
  aws_appstream_image,
  json_each(image_errors) as e;
title description
Steampipe Table: aws_appsync_graphql_api - Query AWS AppSync GraphQL API using SQL
Allows users to query AppSync GraphQL APIs to retrieve detailed information about each individual GraphQL API.

Table: aws_appsync_graphql_api - Query AWS AppSync GraphQL APIs using SQL

AWS AppSync is a fully managed service provided by Amazon Web Services (AWS) that simplifies the development of scalable and secure GraphQL APIs. GraphQL is a query language for APIs that allows clients to request only the data they need, making it more efficient and flexible compared to traditional REST APIs.

Table Usage Guide

The aws_appsync_graphql_api table in Steampipe provides you with information about GraphQL APIs within AWS AppSync. This table allows you, as a data analyst or developer, to query GraphQL API-specific details, including the authentication type, the owner of the API, and the log configuration details of the API.

Examples

List all merged APIs

A merged GraphQL API typically refers to a GraphQL API that aggregates or combines data from multiple sources into a single, unified GraphQL schema. This approach is often used to create a single, cohesive interface for clients, even when the underlying data comes from different services, databases, or microservices.

select
  name,
  api_id,
  arn,
  api_type,
  authentication_type,
  owner,
  owner_contact
from
  aws_appsync_graphql_api
where
  api_type = 'MERGED';

List public APIs of the current account

A public AppSync GraphQL API is accessible over the internet, and clients outside of your AWS account can make requests to it. Public APIs are typically configured with an authentication mechanism to control and secure access. Common authentication methods include API keys and OpenID Connect (OIDC) integration with an identity provider.

select
  name,
  api_id,
  api_type,
  visibility
from
  aws_appsync_graphql_api
where
  visibility = 'GLOBAL'
  and owner = account_id;

Get the log configuration details of APIs

Review the logging configuration of each GraphQL API, including the CloudWatch Logs role, the field log level, and whether verbose content is excluded. This helps you verify that API activity is being captured at the appropriate level of detail for troubleshooting and auditing.

select
  name,
  api_id,
  owner,
  log_config ->> 'CloudWatchLogsRoleArn' as cloud_watch_logs_role_arn,
  log_config ->> 'FieldLogLevel' as field_log_level,
  log_config ->> 'ExcludeVerboseContent' as exclude_verbose_content
from
  aws_appsync_graphql_api;
select
  name,
  api_id,
  owner,
  json_extract(log_config, '$.CloudWatchLogsRoleArn') as cloud_watch_logs_role_arn,
  json_extract(log_config, '$.FieldLogLevel') as field_log_level,
  json_extract(log_config, '$.ExcludeVerboseContent') as exclude_verbose_content
from
  aws_appsync_graphql_api;
title description
Steampipe Table: aws_athena_query_execution - Query AWS Athena Query Executions using SQL
Allows users to query AWS Athena Query Executions to retrieve detailed information about each individual query execution.

Table: aws_athena_query_execution - Query AWS Athena Query Executions using SQL

AWS Athena Query Execution is a feature of Amazon Athena that allows you to run SQL queries on data stored in Amazon S3. It executes queries using an interactive query service that leverages standard SQL. This enables you to analyze data directly in S3 without the need for complex ETL jobs.

Table Usage Guide

The aws_athena_query_execution table in Steampipe provides you with information about query executions within AWS Athena. This table allows you, as a data analyst or developer, to query execution-specific details, including execution status, result configuration, and associated metadata. You can utilize this table to track the progress of queries, analyze the performance of queries, and understand the cost of running specific queries. The schema outlines the various attributes of the Athena query execution for you, including the query execution id, query, output location, data scanned, and execution time.

Examples

List all queries in error

Explore which queries have resulted in errors to understand the issues and rectify them accordingly. This is useful in identifying and resolving potential problems within your AWS Athena query execution.

select
  id,
  query,
  error_message,
  error_type
from
  aws_athena_query_execution
where
  error_message is not null;

Estimate data read by each workgroup

Analyze the volume of data processed by each workgroup to understand workload distribution and optimize resources accordingly. This can be useful in identifying workgroups that are processing large amounts of data and may require additional resources or optimization.

select 
  workgroup, 
  sum(data_scanned_in_bytes) 
from 
  aws_athena_query_execution
group by 
  workgroup;
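Because Athena bills by data scanned, the per-workgroup byte totals above translate directly into an approximate spend. The sketch below assumes the common $5-per-TB on-demand rate, which varies by region, and uses PostgreSQL syntax:

```sql
select
  workgroup,
  sum(data_scanned_in_bytes) as total_bytes,
  sum(data_scanned_in_bytes) / power(1024, 4) * 5 as estimated_cost_usd
from
  aws_athena_query_execution
group by
  workgroup;
```

Note that Athena also applies a per-query minimum scan charge, so treat this as a rough upper-level estimate rather than an exact bill.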

Find queries with biggest execution time

Discover the queries that have the longest execution times to identify potential areas for performance optimization and enhance the efficiency of your AWS Athena operations.

select
  id,
  query,
  workgroup,
  engine_execution_time_in_millis
from
  aws_athena_query_execution
order by
  engine_execution_time_in_millis desc
limit 5;

Find most used databases

Discover the databases that are frequently used in your AWS Athena environment. This can help optimize resource allocation and identify potential areas for performance improvement.

select
  database,
  count(id) as nb_query
from
  aws_athena_query_execution
group by
  database
order by
  nb_query desc
limit 5;
title description
Steampipe Table: aws_athena_workgroup - Query AWS Athena Workgroup using SQL
Allows users to query AWS Athena Workgroup details such as workgroup name, state, description, creation time, and more.

Table: aws_athena_workgroup - Query AWS Athena Workgroup using SQL

An AWS Athena Workgroup is a resource that isolates query execution and query history between users, teams, or applications. It provides a means of managing query execution across multiple users and teams within an organization, allowing for better control over costs, performance, and security when querying data with Athena.

Table Usage Guide

The aws_athena_workgroup table in Steampipe provides you with information about workgroups within AWS Athena. This table allows you as a DevOps engineer to query workgroup-specific details, including workgroup name, state, description, creation time, and more. You can utilize this table to gather insights on workgroups, such as workgroup configurations, encryption configurations, and enforcement settings. The schema outlines the various attributes of the Athena workgroup for you, including the workgroup ARN, state, tags, and configuration details.

Examples

List all workgroups with basic information

Explore the various workgroups within your AWS Athena service to gain insights into their basic details such as name, description, and creation time. This can be useful for understanding your workgroup configuration and identifying any potential areas for optimization or reorganization.

select 
  name, 
  description, 
  effective_engine_version, 
  output_location, 
  creation_time 
from 
  aws_athena_workgroup 
order by 
  creation_time;

List all workgroups using engine 3

Determine the areas in which workgroups are utilizing a specific version of the Athena engine. This is useful for assessing upgrade needs or understanding the distribution of engine versions across your workgroups.

select 
  name, 
  description 
from 
  aws_athena_workgroup 
where 
  effective_engine_version = 'Athena engine version 3';

Count workgroups in each region

Assess the distribution of workgroups across different regions to understand workload allocation and capacity planning. This can assist in identifying regions that may be under or over-utilized.

select 
  region, 
  count(*) 
from 
  aws_athena_workgroup 
group by 
  region;

List disabled workgroups

Determine the areas in which workgroups are inactive, providing insights into resource usage and potential areas for optimization or re-allocation.

select 
  name, 
  description, 
  creation_time
from 
  aws_athena_workgroup 
where
  state = 'DISABLED';
title description
Steampipe Table: aws_auditmanager_assessment - Query AWS Audit Manager Assessments using SQL
Allows users to query AWS Audit Manager Assessments to retrieve detailed information about each assessment.

Table: aws_auditmanager_assessment - Query AWS Audit Manager Assessments using SQL

The AWS Audit Manager Assessment is a feature of AWS Audit Manager that helps you continuously audit your AWS usage to simplify your risk management and compliance. It automates evidence collection to enable you to scale your audit capability as your AWS usage grows. This tool facilitates assessment of the effectiveness of your controls and helps you maintain continuous compliance by managing audits throughout their lifecycle.

Table Usage Guide

The aws_auditmanager_assessment table in Steampipe provides you with information about assessments within AWS Audit Manager. This table allows you, as a DevOps engineer, to query assessment-specific details, including the assessment status, scope, roles, and associated metadata. You can utilize this table to gather insights on assessments, such as assessment status, scope of the assessments, roles associated with the assessments, and more. The schema outlines the various attributes of the AWS Audit Manager assessment for you, including the assessment ID, name, description, status, and associated tags.

Examples

Basic info

Explore which AWS Audit Manager assessments are currently active and what their compliance types are. This can be useful for keeping track of your organization's compliance status and ensuring all assessments are functioning as expected.

select
  name,
  arn,
  status,
  compliance_type
from
  aws_auditmanager_assessment;

List assessments with public audit bucket

This query is useful for identifying assessments that are associated with a public audit bucket. This can help in enhancing the security measures by pinpointing potential areas of vulnerability, as public audit buckets can be accessed by anyone.

select
  a.name,
  a.arn,
  a.assessment_report_destination,
  a.assessment_report_destination_type,
  b.bucket_policy_is_public as is_public_bucket
from
  aws_auditmanager_assessment as a
join aws_s3_bucket as b on a.assessment_report_destination = 's3://' || b.name and b.bucket_policy_is_public;

List inactive assessments

Determine the areas in which assessments are not currently active, enabling you to focus resources on those that require attention or action.

select
  name,
  arn,
  status
from
  aws_auditmanager_assessment
where
  status <> 'ACTIVE';
select
  name,
  arn,
  status
from
  aws_auditmanager_assessment
where
  status != 'ACTIVE';
title description
Steampipe Table: aws_auditmanager_control - Query AWS Audit Manager Control using SQL
Allows users to query AWS Audit Manager Control data, providing information about controls within AWS Audit Manager. This table enables users to access detailed information about controls, such as control source, control type, description, and associated metadata.

Table: aws_auditmanager_control - Query AWS Audit Manager Control using SQL

The AWS Audit Manager Control is a feature within AWS Audit Manager that allows you to evaluate how well your AWS resource configurations align with established best practices. It helps you to simplify the compliance process and reduce risk by automating the collection of evidence of your AWS resource compliance with regulations and standards. The control feature allows for continuous auditing to ensure ongoing compliance.

Table Usage Guide

The aws_auditmanager_control table in Steampipe provides you with information about controls within AWS Audit Manager. This table allows you, as a DevOps engineer, to query control-specific details, including control source, control type, description, and associated metadata. You can utilize this table to gather insights on controls, such as their sources, types, descriptions, and more. The schema outlines the various attributes of the control for you, including the control id, name, type, source, description, and associated tags.

Examples

Basic info

Explore the basic information about the controls in AWS Audit Manager to understand their purpose and type. This can help in managing and assessing your AWS resources and environment effectively.

select
  name,
  id,
  description,
  type
from
  aws_auditmanager_control;

List custom audit manager controls

Discover the segments that consist of custom audit manager controls in your AWS environment. This can be particularly useful for understanding and managing your custom security and compliance configurations.

select
  name,
  id,
  type
from
  aws_auditmanager_control
where
  type = 'Custom';
title description
Steampipe Table: aws_auditmanager_evidence - Query AWS Audit Manager Evidence using SQL
Allows users to query AWS Audit Manager Evidence, providing detailed information about evidence resources associated with assessments in AWS Audit Manager.

Table: aws_auditmanager_evidence - Query AWS Audit Manager Evidence using SQL

The AWS Audit Manager Evidence is a component of AWS Audit Manager service that automates the collection and organization of evidence for audits. It simplifies the process of gathering necessary documents to demonstrate to auditors that your controls are operating effectively. This resource assists in continuously auditing your AWS usage to simplify risk assessment and compliance with regulations and industry standards.

Table Usage Guide

The aws_auditmanager_evidence table in Steampipe provides you with information about evidence resources within AWS Audit Manager. This table allows you, as a DevOps engineer, to query evidence-specific details, including the source, collection method, and associated metadata. You can utilize this table to gather insights on evidence, such as the evidence state, evidence by type, and the AWS resource from which the evidence was collected. The schema outlines the various attributes of the evidence for you, including the evidence id, assessment id, control set id, evidence folder id, and associated tags.

Examples

Basic info

Explore the various pieces of evidence collected in AWS Audit Manager to understand their association with different control sets and IAM identities. This can help in assessing the compliance status of your AWS resources and identifying areas that may need attention.

select
  id,
  arn,
  evidence_folder_id,
  evidence_by_type,
  iam_id,
  control_set_id
from
  aws_auditmanager_evidence;

Get evidence count by evidence folder

Analyze the distribution of evidence across different folders in AWS Audit Manager to understand the workload and prioritize accordingly. This can help in efficiently managing and reviewing the collected evidence.

select
  evidence_folder_id,
  count(id) as evidence_count
from
  aws_auditmanager_evidence
group by
  evidence_folder_id;
title description
Steampipe Table: aws_auditmanager_evidence_folder - Query AWS Audit Manager Evidence Folders using SQL
Allows users to query AWS Audit Manager Evidence Folders to get comprehensive details about the evidence folders in the AWS Audit Manager service.

Table: aws_auditmanager_evidence_folder - Query AWS Audit Manager Evidence Folders using SQL

The AWS Audit Manager Evidence Folders are used to organize and store evidence collected for assessments. This evidence can be automatically collected by AWS Audit Manager or manually uploaded by users. The evidence folders help in managing compliance audits and providing detailed proof of how the data is being handled within the AWS environment.

Table Usage Guide

The aws_auditmanager_evidence_folder table in Steampipe provides you with information about evidence folders within AWS Audit Manager. This table allows you, as a DevOps engineer, to query evidence folder-specific details, including the ID, ARN, name, date created, and associated metadata. You can utilize this table to gather insights on evidence folders, such as the total count of evidence in the folder, the status of the evidence, verification of evidence source, and more. The schema outlines the various attributes of the evidence folder for you, including the evidence folder ID, ARN, creation date, and associated tags.

Examples

Basic info

Explore which evidence folders exist within your AWS Audit Manager to better manage and assess your compliance controls and evidence. This can help you identify areas where you might need to gather additional evidence or focus your auditing efforts.

select
  name,
  id,
  arn,
  assessment_id,
  control_set_id,
  control_id,
  total_evidence
from
  aws_auditmanager_evidence_folder;

Count the number of evidence folders by assessment ID

Explore how many evidence folders are associated with each assessment in your AWS Audit Manager. This is useful for understanding the volume of evidence collected for each audit, aiding in audit management and review processes.

select
  assessment_id,
  count(id) as evidence_folder_count
from
  aws_auditmanager_evidence_folder
group by
  assessment_id;
title description
Steampipe Table: aws_auditmanager_framework - Query AWS Audit Manager Framework using SQL
Allows users to query AWS Audit Manager Frameworks to retrieve details such as the framework ARN, ID, type, and associated metadata.

Table: aws_auditmanager_framework - Query AWS Audit Manager Framework using SQL

The AWS Audit Manager Framework is a feature of AWS Audit Manager that helps you continuously audit your AWS usage to simplify your compliance with regulations and industry standards. It automates evidence collection to enable you to scale your audit capability in AWS, reducing the effort needed to assess risk and compliance. This feature is especially useful for organizations that need to maintain a consistent audit process across various AWS services.

Table Usage Guide

The aws_auditmanager_framework table in Steampipe provides you with information about frameworks within AWS Audit Manager. This table allows you, as a DevOps engineer, to query framework-specific details, including the framework's ARN, ID, type, and associated metadata. You can utilize this table to gather insights on frameworks, such as the number of controls associated with each framework, the compliance type, and more. The schema outlines the various attributes of the Audit Manager Framework for you, including the framework ARN, creation date, last updated date, and associated tags.

Examples

Basic info

Explore which audit frameworks are currently implemented in your AWS environment. This can help in assessing your existing auditing strategies and identifying areas for improvement.

select
  name,
  arn,
  id,
  type
from
  aws_auditmanager_framework;

List custom audit manager frameworks

Uncover the details of your custom audit frameworks within AWS Audit Manager. This query is useful for understanding the scope and details of your custom configurations, aiding in the management and review of your audit frameworks.

select
  name,
  arn,
  id,
  type
from
  aws_auditmanager_framework
where
  type = 'Custom';
title description
Steampipe Table: aws_availability_zone - Query EC2 Availability Zones using SQL
Allows users to query EC2 Availability Zones in AWS, providing details such as zone ID, name, region, and state.

Table: aws_availability_zone - Query EC2 Availability Zones using SQL

The AWS EC2 Availability Zones are isolated locations within data center regions from which public cloud services originate and operate. They are designed to provide stable, secure, and high availability services by allowing users to run instances in several locations. These zones are an essential component for fault-tolerant and highly available infrastructure design, enabling applications to continue functioning despite a failure within a single location.

Table Usage Guide

The aws_availability_zone table in Steampipe provides you with information about Availability Zones within AWS Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query zone-specific details, including zone ID, name, region, and state. You can utilize this table to gather insights on zones, such as zones that are currently available, the regions associated with each zone, and more. The schema outlines the various attributes of the Availability Zone for you, including the zone ID, zone name, region name, and zone state.

Examples

Availability zone info

Analyze the settings to understand the distribution and types of availability zones in different regions. This can aid in planning resource deployment for optimal performance and redundancy.

select
  name,
  zone_id,
  zone_type,
  group_name,
  region_name
from
  aws_availability_zone;
select
  name,
  zone_id,
  zone_type,
  group_name,
  region_name
from
  aws_availability_zone;

Count of availability zone per region

Determine the distribution of availability zones across different regions to understand the geographical spread of your AWS resources.

select
  region_name,
  count(name) as zone_count
from
  aws_availability_zone
group by
  region_name;
select
  region_name,
  count(name) as zone_count
from
  aws_availability_zone
group by
  region_name;

List of AWS availability zones which are not enabled in the account

Identify the AWS availability zones that are not currently enabled within your account. This is useful for understanding which zones you may want to opt into for increased redundancy or global coverage.

select
  name,
  zone_id,
  region_name,
  opt_in_status
from
  aws_availability_zone
where
  opt_in_status = 'not-opted-in';
select
  name,
  zone_id,
  region_name,
  opt_in_status
from
  aws_availability_zone
where
  opt_in_status = 'not-opted-in';
title description
Steampipe Table: aws_backup_framework - Query AWS Backup Frameworks using SQL
Allows users to query AWS Backup Frameworks and retrieve comprehensive data about each backup plan, including its unique ARN, version, creation and deletion dates, and more.

Table: aws_backup_framework - Query AWS Backup Frameworks using SQL

The AWS Backup service provides a centralized framework to manage and automate data backup across AWS services. It helps you to meet business and regulatory backup compliance requirements by simplifying the management and reducing the cost of backup operations. AWS Backup offers a cost-effective, fully managed, policy-based backup solution, protecting your data in AWS services.

Table Usage Guide

The aws_backup_framework table in Steampipe provides you with information about each backup framework within AWS Backup service. This table empowers you, as a DevOps engineer, to query backup plan-specific details, including the backup plan's ARN, version, creation date, deletion date, and more. You can utilize this table to gather insights on backup plans, such as their status, associated rules, and other relevant metadata. The schema outlines the various attributes of the backup plan for you, including the backup plan ARN, version, creation and deletion dates, and more.

Examples

Basic info

This query is used to gain insights into the deployment status, creation time, and other details of your AWS backup frameworks. The practical application is to understand the configuration and status of your backup systems for effective management and troubleshooting.

select
  account_id,
  arn,
  creation_time,
  deployment_status,
  framework_controls,
  framework_description,
  framework_name,
  framework_status,
  number_of_controls,
  region,
  tags
from
  aws_backup_framework;
select
  account_id,
  arn,
  creation_time,
  deployment_status,
  framework_controls,
  framework_description,
  framework_name,
  framework_status,
  number_of_controls,
  region,
  tags
from
  aws_backup_framework;

List AWS frameworks created within the last 90 days

Determine the AWS frameworks that have been established within the past three months. This is beneficial for understanding recent changes and additions to your AWS environment, allowing you to stay updated on your current configurations and controls.

select
  framework_name,
  arn,
  creation_time,
  number_of_controls
from
  aws_backup_framework
where
  creation_time >= (current_date - interval '90' day)
order by
  creation_time;
select
  framework_name,
  arn,
  creation_time,
  number_of_controls
from
  aws_backup_framework
where
  creation_time >= date('now','-90 day')
order by
  creation_time;

List frameworks that are using a specific control (BACKUP_RESOURCES_PROTECTED_BY_BACKUP_VAULT_LOCK)

Determine the frameworks which are utilizing a specific control for resource protection in a backup vault. This is useful for identifying potential areas of risk or for compliance monitoring.

select
  framework_name
from
  aws_backup_framework,
  jsonb_array_elements(framework_controls) as controls
where
  controls ->> 'ControlName' = 'BACKUP_RESOURCES_PROTECTED_BY_BACKUP_VAULT_LOCK';
select
  framework_name
from
  aws_backup_framework,
  json_each(framework_controls) as controls
where
  json_extract(controls.value, '$.ControlName') = 'BACKUP_RESOURCES_PROTECTED_BY_BACKUP_VAULT_LOCK';

List control names and scopes for each framework

Determine the areas in which specific control names and scopes are applied within each framework. This is particularly useful for understanding the scope of control within AWS backup frameworks, aiding in effective resource management and compliance. This query will return an empty control scope if the control doesn't apply to a specific AWS resource type. Otherwise, the query will list the control name and the AWS resource type.

select
  framework_name,
  controls ->> 'ControlName' as control_name,
  control_scope
from
  aws_backup_framework,
  jsonb_array_elements(framework_controls) as controls,
  json_array_elements_text(coalesce(controls -> 'ControlScope' ->> 'ComplianceResourceTypes', '[""]')::json) as control_scope
where
  framework_name = 'framework_name';
select
  framework_name,
  json_extract(controls.value, '$.ControlName') as control_name,
  control_scope.value as control_scope
from
  aws_backup_framework,
  json_each(framework_controls) as controls,
  json_each(json(coalesce(json_extract(controls.value, '$.ControlScope.ComplianceResourceTypes'), '[""]'))) as control_scope
where
  framework_name = 'framework_name';

List framework controls that have non-compliant resources

Determine the areas in which framework controls are not compliant with the rules. This can be useful for identifying and rectifying non-compliant resources to ensure adherence to organizational policies and standards.

select
  rule_name,
  compliance_result -> 'Compliance' ->> 'ComplianceType' as compliance_type,
  compliance_result -> 'Compliance' -> 'ComplianceContributorCount' ->> 'CappedCount' as count_of_noncompliant_resources
from
  aws_config_rule
inner join
(
  -- The sub-query will create the AWS Config rule name from information stored in the AWS Backup framework table.
  select
    case when framework_information.control_scope = '' then concat(framework_information.control_name, '-', framework_information.framework_uuid)
    else concat(upper(framework_information.control_scope), '-', framework_information.control_name, '-', framework_information.framework_uuid)
    end as rule_name
  from
  (
    select
      framework_name,
      controls ->> 'ControlName' as control_name,
      control_scope,
      right(arn, 36) as framework_uuid
    from
      aws_backup_framework,
      jsonb_array_elements(framework_controls) as controls,
      json_array_elements_text(coalesce(controls -> 'ControlScope' ->> 'ComplianceResourceTypes', '[""]')::json) as control_scope
  ) as framework_information
) as backup_framework
on
  aws_config_rule.name = backup_framework.rule_name,
  jsonb_array_elements(compliance_by_config_rule) as compliance_result
where
  compliance_result -> 'Compliance' ->> 'ComplianceType' = 'NON_COMPLIANT';
select
  rule_name,
  json_extract(compliance_result.value, '$.Compliance.ComplianceType') as compliance_type,
  json_extract(compliance_result.value, '$.Compliance.ComplianceContributorCount.CappedCount') as count_of_noncompliant_resources
from
  aws_config_rule
join
(
  -- The sub-query will create the AWS Config rule name from information stored in the AWS Backup framework table.
  select
    case when control_scope = '' then control_name || '-' || framework_uuid
    else upper(control_scope) || '-' || control_name || '-' || framework_uuid
    end as rule_name
  from
  (
    select
      framework_name,
      json_extract(controls.value, '$.ControlName') as control_name,
      control_scope.value as control_scope,
      substr(arn, -36) as framework_uuid
    from
      aws_backup_framework,
      json_each(framework_controls) as controls,
      json_each(coalesce(json_extract(controls.value, '$.ControlScope.ComplianceResourceTypes'), '[""]')) as control_scope
  ) as framework_information
) as backup_framework
on
  aws_config_rule.name = backup_framework.rule_name,
  json_each(compliance_by_config_rule) as compliance_result
where
  json_extract(compliance_result.value, '$.Compliance.ComplianceType') = 'NON_COMPLIANT';

List framework controls that are compliant

Identify the compliant framework controls within your AWS Config rules. This allows you to gain insights into your compliance status and helps in maintaining adherence to regulatory standards.

select
  rule_name,
  compliance_result -> 'Compliance' ->> 'ComplianceType' as compliance_type
from
  aws_config_rule
inner join
(
  -- The sub-query will create the AWS Config rule name from information stored in the AWS Backup framework table.
  select
    case when framework_information.control_scope = '' then concat(framework_information.control_name, '-', framework_information.framework_uuid)
    else concat(upper(framework_information.control_scope), '-', framework_information.control_name, '-', framework_information.framework_uuid)
    end as rule_name
  from
  (
    select
      framework_name,
      controls ->> 'ControlName' as control_name,
      control_scope,
      right(arn, 36) as framework_uuid
    from
      aws_backup_framework,
      jsonb_array_elements(framework_controls) as controls,
      json_array_elements_text(coalesce(controls -> 'ControlScope' ->> 'ComplianceResourceTypes', '[""]')::json) as control_scope
  ) as framework_information
) as backup_framework
on
  aws_config_rule.name = backup_framework.rule_name,
  jsonb_array_elements(compliance_by_config_rule) as compliance_result
where
  compliance_result -> 'Compliance' ->> 'ComplianceType' = 'COMPLIANT';
select
  rule_name,
  json_extract(compliance_result.value, '$.Compliance.ComplianceType') as compliance_type
from
  aws_config_rule
inner join
(
  -- The sub-query will create the AWS Config rule name from information stored in the AWS Backup framework table.
  select
    case when framework_information.control_scope = '' then framework_information.control_name || '-' || framework_information.framework_uuid
    else upper(framework_information.control_scope) || '-' || framework_information.control_name || '-' || framework_information.framework_uuid
    end as rule_name
  from
  (
    select
      framework_name,
      json_extract(controls.value, '$.ControlName') as control_name,
      control_scope.value as control_scope,
      substr(arn, -36) as framework_uuid
    from
      aws_backup_framework,
      json_each(framework_controls) as controls,
      json_each(coalesce(json_extract(controls.value, '$.ControlScope.ComplianceResourceTypes'), '[""]')) as control_scope
  ) as framework_information
) as backup_framework
on
  aws_config_rule.name = backup_framework.rule_name,
  json_each(compliance_by_config_rule) as compliance_result
where
  json_extract(compliance_result.value, '$.Compliance.ComplianceType') = 'COMPLIANT';
title description
Steampipe Table: aws_backup_job - Query AWS Backup Jobs using SQL
Allows users to query AWS Backup Jobs, providing detailed information about the status of backup jobs.

Table: aws_backup_job - Query AWS Backup Jobs using SQL

The AWS Backup Jobs are a part of the AWS Backup service, which provides users with a fully managed solution for data protection. These jobs are used to copy data from various sources to AWS Backup Vaults. A backup job can be created manually or automated using a Backup Plan, which specifies the source data set, the target Backup Vault, the backup frequency, and the retention period.

Table Usage Guide

The aws_backup_job table in Steampipe provides detailed information about backup jobs within AWS Backup. This table allows you to query specific details of each job, such as its state, target vault name, ARN, recovery points, and associated metadata. By utilizing this table, you can gain insights into backup jobs, including the number of successful or failed jobs, the creation date of each job, and more. The schema outlines various attributes of the backup job, including the target vault name, ARN, creation date, job state, and associated tags.

Examples

Basic Info

Track the status of your AWS backup jobs, including their job ID, recovery points, and the backup vaults they were created in. This feature is especially valuable for disaster recovery purposes, as it allows you to monitor the progress and status of your backup jobs. By keeping tabs on your backup jobs, you can ensure the safety and availability of your important data.

select
  job_id,
  recovery_point_arn,
  backup_vault_arn,
  status
from
  aws_backup_job;
select
  job_id,
  recovery_point_arn,
  backup_vault_arn,
  status
from
  aws_backup_job;

List failed backup jobs

Identify backup jobs that have failed to create a recovery point. This information can be valuable in identifying backup processes that may need maintenance or review.

select
  job_id,
  recovery_point_arn,
  backup_vault_arn,
  status
from
  aws_backup_job
where
  status != 'COMPLETED'
  and creation_date > current_date;
select
  job_id,
  recovery_point_arn,
  backup_vault_arn,
  status
from
  aws_backup_job
where
  status != 'COMPLETED'
  and creation_date > current_date;

List backup jobs by resource type

Monitor the number of your AWS backup jobs by resource type.

select
  resource_type,
  count(*)
from
  aws_backup_job
group by
  resource_type;
select
  resource_type,
  count(*)
from
  aws_backup_job
group by
  resource_type;
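
The count above can also be broken down by status, highlighting which resource types see the most failed or incomplete jobs. This sketch uses only the columns already shown and runs unchanged in both PostgreSQL and SQLite:

```sql
select
  resource_type,
  status,
  count(*) as job_count
from
  aws_backup_job
group by
  resource_type,
  status
order by
  job_count desc;
```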
title description
Steampipe Table: aws_backup_plan - Query AWS Backup Plan using SQL
Allows users to query AWS Backup Plan data, providing detailed information about each backup plan created within an AWS account. Useful for DevOps engineers to monitor and manage backup strategies and ensure data recovery processes are in place.

Table: aws_backup_plan - Query AWS Backup Plan using SQL

The AWS Backup Plan is a policy-based solution for defining, scheduling, and automating the backup activities of AWS resources. It enables you to centralize and automate data protection across AWS services, simplifying management and reducing operational costs. With AWS Backup, you can customize where and how you backup your resources, providing flexibility and control over your data protection strategy.

Table Usage Guide

The aws_backup_plan table in Steampipe provides you with information about each backup plan within AWS Backup. This table allows you, as a DevOps engineer, to query backup plan-specific details, including backup options, creation and version details, and associated metadata. You can utilize this table to gather insights on backup plans, such as the backup frequency, backup window, lifecycle of the backup, and more. The schema outlines the various attributes of the backup plan for you, including the backup plan ARN, creation date, version, and associated tags.

Examples

Basic Info

Assess the elements within your AWS backup plans to understand when they were created and when they were last executed. This can help in monitoring and managing your backup strategies effectively.

select
  name,
  backup_plan_id,
  arn,
  creation_date,
  last_execution_date
from
  aws_backup_plan;
select
  name,
  backup_plan_id,
  arn,
  creation_date,
  last_execution_date
from
  aws_backup_plan;

List plans older than 90 days

Determine the areas in which backup plans have been inactive for more than 90 days. This can aid in identifying outdated or potentially unnecessary backup plans, facilitating better resource management.

select
  name,
  backup_plan_id,
  arn,
  creation_date,
  last_execution_date
from
  aws_backup_plan
where
  creation_date <= (current_date - interval '90' day)
order by
  creation_date;
select
  name,
  backup_plan_id,
  arn,
  creation_date,
  last_execution_date
from
  aws_backup_plan
where
  creation_date <= date('now','-90 day')
order by
  creation_date;

List plans that were deleted in the last 7 days

Determine the areas in which backup plans were recently removed within the AWS environment to keep track of changes and maintain security standards.

select
  name,
  arn,
  creation_date,
  deletion_date
from
  aws_backup_plan
where
  deletion_date > current_date - 7
order by
  deletion_date;
select
  name,
  arn,
  creation_date,
  deletion_date
from
  aws_backup_plan
where
  deletion_date > date('now','-7 day')
order by
  deletion_date;
title description
Steampipe Table: aws_backup_protected_resource - Query AWS Backup Protected Resources using SQL
Allows users to query AWS Backup Protected Resources to retrieve detailed information about the resources that are backed up by AWS Backup service.

Table: aws_backup_protected_resource - Query AWS Backup Protected Resources using SQL

AWS Backup Protected Resources are the critical data, system configurations, and applications that are safeguarded by AWS Backup. This service provides a fully managed, policy-based backup solution, simplifying the process of backing up data across AWS services. It offers a centralized place to manage backups, audit and monitor activities, and apply retention policies, thus enhancing data protection and compliance.

Table Usage Guide

The aws_backup_protected_resource table in Steampipe provides you with information about the resources that are backed up by AWS Backup service. This table allows you, as a DevOps engineer, security analyst, or system administrator, to query resource-specific details, including resource ARN, type, backup plan ID, and the last backup time. You can utilize this table to gather insights on backed up resources, such as retrieving the last backup time, identifying resources that are not backed up, verifying the backup plan associated with each resource, and more. The schema outlines the various attributes of the backed up resource, including the resource ARN, resource type, backup plan ID, and last backup time for you.

Examples

Basic Info

Discover the segments that are protected by AWS Backup service and when they were last backed up. This is useful for maintaining data recovery readiness and ensuring that critical resources are sufficiently protected.

select
  resource_arn,
  resource_type,
  last_backup_time
from
  aws_backup_protected_resource;
select
  resource_arn,
  resource_type,
  last_backup_time
from
  aws_backup_protected_resource;

List EBS volumes that are backed up

Determine the areas in which EBS volumes are backed up, allowing you to understand the reach of your backup strategy and ensure no critical data is left unprotected.

select
  resource_arn,
  resource_type,
  last_backup_time
from
  aws_backup_protected_resource
where
  resource_type = 'EBS';
select
  resource_arn,
  resource_type,
  last_backup_time
from
  aws_backup_protected_resource
where
  resource_type = 'EBS';
title description
Steampipe Table: aws_backup_recovery_point - Query AWS Backup Recovery Points using SQL
Allows users to query AWS Backup Recovery Points to gather comprehensive information about each recovery point within an AWS Backup vault.

Table: aws_backup_recovery_point - Query AWS Backup Recovery Points using SQL

The AWS Backup Recovery Point is a component of AWS Backup, a fully managed backup service that makes it easy to centralize and automate the backup of data across AWS services. This resource, the recovery point, is an entity that contains all the metadata that AWS Backup needs to recover a protected resource, such as an Amazon RDS database, an Amazon EBS volume, or an Amazon S3 bucket. The recovery point is created after a successful backup of a protected resource.

Table Usage Guide

The aws_backup_recovery_point table in Steampipe provides you with information about each recovery point within an AWS Backup vault. This table allows you, as a DevOps engineer or system administrator, to query recovery point-specific details, including the backup vault where the recovery point is stored, the source of the backup, the state of the recovery point, and associated metadata. You can utilize this table to gather insights on recovery points, such as identifying unencrypted recovery points, verifying backup completion status, and more. The schema outlines the various attributes of the recovery point for you, including the recovery point ARN, creation date, backup size, and associated tags.

Note: The value in the tags column will be populated only if its resource type has a checkmark for Full AWS Backup management as per AWS Backup docs. This means the recovery point ARN must match the pattern arn:aws:backup:[a-z0-9\-]+:[0-9]{12}:recovery-point:.*
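
As a sketch, recovery points matching this pattern (and so eligible to have tags populated) can be isolated with a `like` filter; note that `%` wildcards only approximate the regex above:

```sql
select
  recovery_point_arn,
  resource_type,
  tags
from
  aws_backup_recovery_point
where
  recovery_point_arn like 'arn:aws:backup:%:recovery-point:%';
```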

Examples

Basic Info

Discover the segments that are significant in your AWS backup recovery points. This can be beneficial for assessing the status and type of resources within your backup vaults, which can help in managing your backup strategy effectively.

select
  backup_vault_name,
  recovery_point_arn,
  resource_type,
  status
from
  aws_backup_recovery_point;
select
  backup_vault_name,
  recovery_point_arn,
  resource_type,
  status
from
  aws_backup_recovery_point;

List encrypted recovery points

Identify instances where your recovery points are encrypted to ensure data security and compliance. This query is useful to maintain a secure and compliant data backup system by pinpointing the specific locations where encryption is applied.

select
  backup_vault_name,
  recovery_point_arn,
  resource_type,
  status,
  is_encrypted
from
  aws_backup_recovery_point
where
  is_encrypted;
select
  backup_vault_name,
  recovery_point_arn,
  resource_type,
  status,
  is_encrypted
from
  aws_backup_recovery_point
where
  is_encrypted = 1;

Get associated tags for recovery points of the EC2, EBS, and S3 resource types

Retrieve metadata, in the form of tags, for recovery points associated with specific resource types such as EC2 instances, EBS volumes, and S3 buckets. Tags are key-value pairs that provide valuable information about AWS resources.

select
  r.backup_vault_name as backup_vault_name,
  r.recovery_point_arn as recovery_point_arn,
  r.resource_type as resource_type,
  case
    when r.resource_type = 'EBS' then (
      select tags from aws_ebs_snapshot where arn = concat(
        (string_to_array(r.recovery_point_arn, '::'))[1],
        ':',
        r.account_id,
        ':',
        (string_to_array(r.recovery_point_arn, '::'))[2]
      )
    )
    when r.resource_type = 'EC2' then (
      select tags from aws_ec2_ami where image_id = (string_to_array(r.recovery_point_arn, '::image/'))[2]
    )
    when r.resource_type in ('S3', 'EFS') then r.tags
  end as tags,
  r.region,
  r.account_id
from
  aws_backup_recovery_point as r;
select
  r.backup_vault_name as backup_vault_name,
  r.recovery_point_arn as recovery_point_arn,
  r.resource_type as resource_type,
  case
    when r.resource_type = 'EBS' then (
      select tags from aws_ebs_snapshot where arn = substr(r.recovery_point_arn, instr(r.recovery_point_arn, '::') + 2)
    )
    when r.resource_type = 'EC2' then (
      select tags from aws_ec2_ami where image_id = substr(r.recovery_point_arn, instr(r.recovery_point_arn, '::image/') + 8)
    )
    when r.resource_type in ('S3', 'EFS') then r.tags
  end as tags,
  r.region,
  r.account_id
from
  aws_backup_recovery_point as r;
title description
Steampipe Table: aws_backup_report_plan - Query AWS Backup Report Plan using SQL
Allows users to query AWS Backup Report Plan data, including details about backup jobs, recovery points, and backup vaults.

Table: aws_backup_report_plan - Query AWS Backup Report Plan using SQL

The AWS Backup Report Plan is a feature within the AWS Backup service. It allows you to create, manage, and delete report plans for your backup jobs, recovery point, and restore jobs. These report plans can be used to compile and send reports about your backup activities, helping you to effectively monitor and manage your data protection strategy.

Table Usage Guide

The aws_backup_report_plan table in Steampipe provides you with information about the report plans within the AWS Backup service. This table allows you, as a DevOps engineer, to query report plan-specific details, including report delivery channel configurations, report jobs, and associated metadata. You can utilize this table to gather insights on report plans, such as report plan status, configurations, and more. The schema outlines the various attributes of the report plan for you, including the report plan ARN, creation time, report delivery channel, and associated tags.

Examples

Basic Info

Explore the status and details of your AWS backup report plans to understand when they were last executed and their current deployment status. This can help you assess the effectiveness of your backup strategies and identify any potential issues.

select
  arn,
  description,
  creation_time,
  last_attempted_execution_time,
  deployment_status
from
  aws_backup_report_plan;
select
  arn,
  description,
  creation_time,
  last_attempted_execution_time,
  deployment_status
from
  aws_backup_report_plan;

List report plans older than 90 days

Identify instances where AWS backup report plans have been in place for over 90 days. This can be useful for reviewing and managing your backup strategies, ensuring they remain up-to-date and effective.

select
  arn,
  description,
  creation_time,
  last_attempted_execution_time,
  deployment_status
from
  aws_backup_report_plan
where
  creation_time <= (current_date - interval '90' day)
order by
  creation_time;
select
  arn,
  description,
  creation_time,
  last_attempted_execution_time,
  deployment_status
from
  aws_backup_report_plan
where
  creation_time <= date('now','-90 day')
order by
  creation_time;

List report plans that were executed successfully in the last 7 days

Explore which report plans have been successfully executed in the past week. This can be useful to assess the effectiveness of your backup strategy and identify areas for improvement.

select
  arn,
  description,
  creation_time,
  last_attempted_execution_time,
  deployment_status
from
  aws_backup_report_plan
where
  last_successful_execution_time > current_date - 7
order by
  last_successful_execution_time;
select
  arn,
  description,
  creation_time,
  last_attempted_execution_time,
  deployment_status
from
  aws_backup_report_plan
where
  last_successful_execution_time > date('now','-7 days')
order by
  last_successful_execution_time;

Get the report settings for a particular report plan

Determine the configuration details of a specific report plan to understand its structure and settings. This can be useful for auditing purposes, or when planning to modify or replicate the report plan.

select
  arn,
  description,
  creation_time,
  report_setting ->> 'ReportTemplate' as report_template,
  report_setting ->> 'Accounts' as accounts,
  report_setting ->> 'FrameworkArns' as framework_arns,
  report_setting ->> 'NumberOfFrameworks' as number_of_frameworks,
  report_setting ->> 'OrganizationUnits' as organization_units,
  report_setting ->> 'Regions' as regions
from
  aws_backup_report_plan
where
  title = 'backup_jobs_report_12_07_2023';
select
  arn,
  description,
  creation_time,
  json_extract(report_setting, '$.ReportTemplate') as report_template,
  json_extract(report_setting, '$.Accounts') as accounts,
  json_extract(report_setting, '$.FrameworkArns') as framework_arns,
  json_extract(report_setting, '$.NumberOfFrameworks') as number_of_frameworks,
  json_extract(report_setting, '$.OrganizationUnits') as organization_units,
  json_extract(report_setting, '$.Regions') as regions
from
  aws_backup_report_plan
where
  title = 'backup_jobs_report_12_07_2023';

List successfully deployed report plans

Identify instances where report plans have been successfully deployed. This is useful for monitoring the status and efficiency of backup strategies within your AWS environment.

select
  arn,
  description,
  creation_time,
  last_attempted_execution_time,
  deployment_status
from
  aws_backup_report_plan
where
  deployment_status = 'COMPLETED';
select
  arn,
  description,
  creation_time,
  last_attempted_execution_time,
  deployment_status
from
  aws_backup_report_plan
where
  deployment_status = 'COMPLETED';

Get the report delivery channel details for a particular report plan

Explore the specifics of a report delivery method for a given backup report plan. This allows you to understand where and in what format the report will be delivered, which can be useful for managing and organizing your backup reports.

select
  arn,
  description,
  creation_time,
  report_delivery_channel ->> 'Formats' as formats,
  report_delivery_channel ->> 'S3BucketName' as s3_bucket_name,
  report_delivery_channel ->> 'S3KeyPrefix' as s3_key_prefix
from
  aws_backup_report_plan
where
  title = 'backup_jobs_report_12_07_2023';
select
  arn,
  description,
  creation_time,
  json_extract(report_delivery_channel, '$.Formats') as formats,
  json_extract(report_delivery_channel, '$.S3BucketName') as s3_bucket_name,
  json_extract(report_delivery_channel, '$.S3KeyPrefix') as s3_key_prefix
from
  aws_backup_report_plan
where
  title = 'backup_jobs_report_12_07_2023';
title description
Steampipe Table: aws_backup_selection - Query AWS Backup Selections using SQL
Allows users to query AWS Backup Selections to obtain detailed information about the backup selection resources within AWS Backup service.

Table: aws_backup_selection - Query AWS Backup Selections using SQL

The AWS Backup Selection is a component of AWS Backup, a fully managed backup service that simplifies the backup of data across AWS services. It allows you to automate and centrally manage backups, enforcing policies and monitoring backup activities for AWS resources. The selection includes a list of resources to be backed up, identified by an array of ARNs, as well as a backup plan to specify how AWS Backup handles backup and restore operations.

Table Usage Guide

The aws_backup_selection table in Steampipe provides you with comprehensive information about backup selection resources within the AWS Backup service. This table allows you, as a DevOps engineer, security professional, or system administrator, to query backup selection-specific details, including the selection's ARN, backup plan ID, creation and modification dates, and associated creator request ID. You can utilize this table to gather insights on backup selections, such as identifying backup selections associated with specific backup plans, tracking creation and modification times of backup selections, and more. The schema outlines the various attributes of the backup selection for you, including the backup selection ARN, backup plan ID, creation date, creator request ID, and associated tags.

Examples

Basic Info

Explore which AWS backup plans are associated with specific IAM roles and regions. This can be useful for auditing and managing your AWS resources efficiently.

select
  selection_name,
  backup_plan_id,
  iam_role_arn,
  region,
  account_id
from
  aws_backup_selection;
select
  selection_name,
  backup_plan_id,
  iam_role_arn,
  region,
  account_id
from
  aws_backup_selection;

List EBS volumes that are in a backup plan

Identify the EBS volumes included in a backup plan to ensure crucial data is secured and maintained. This is essential for data recovery planning and to minimize potential data loss.

with filtered_data as (
  select
    backup_plan_id,
    jsonb_agg(r) as assigned_resource
  from
    aws_backup_selection,
    jsonb_array_elements(resources) as r
  group by backup_plan_id
)
select
  v.volume_id,
  v.region,
  v.account_id
from
  aws_ebs_volume as v
  join filtered_data t on t.assigned_resource ?| array[v.arn];
Note: SQLite does not support the `?|` array operator used in the PostgreSQL query above.
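
The same check can be sketched in SQLite without `?|` by expanding the resources array with json_each. This assumes the resources column holds exact ARNs; entries using wildcard patterns would not match an equality test:

```sql
select
  v.volume_id,
  v.region,
  v.account_id
from
  aws_ebs_volume as v
where
  exists (
    select 1
    from aws_backup_selection as s,
         json_each(s.resources) as r
    where r.value = v.arn
  );
```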
title description
Steampipe Table: aws_backup_vault - Query AWS Backup Vaults using SQL
Allows users to query AWS Backup Vaults, providing detailed information about each backup vault, including its name, ARN, recovery points, and more.

Table: aws_backup_vault - Query AWS Backup Vaults using SQL

The AWS Backup Vault is a secured place where AWS Backup stores backup data. It provides a scalable, fully managed, policy-based resource for managing and protecting data across AWS services. It is designed to simplify data protection, enable regulatory compliance, and save costs by eliminating the need to create and manage custom scripts and manual processes.

Table Usage Guide

The aws_backup_vault table in Steampipe provides you with information about backup vaults within AWS Backup. This table allows you, as a DevOps engineer, to query vault-specific details, including the vault name, ARN, number of recovery points, and associated metadata. You can utilize this table to gather insights on backup vaults, such as the number of recovery points for each vault, the creation date of each vault, and more. The schema outlines the various attributes of the backup vault for you, including the vault name, ARN, creation date, last resource backup time, and associated tags.

Examples

Basic Info

Uncover the details of your AWS backup vaults, including their names, unique identifiers, and the dates they were created. This can be particularly useful for auditing purposes, allowing you to keep track of your resources and their creation timelines.

select
  name,
  arn,
  creation_date
from
  aws_backup_vault;
select
  name,
  arn,
  creation_date
from
  aws_backup_vault;

List vaults older than 90 days

Identify backup vaults that have been established for over 90 days. This can be beneficial in assessing long-standing storage resources that may require maintenance or review.

select
  name,
  arn,
  creation_date
from
  aws_backup_vault
where
  creation_date <= (current_date - interval '90' day)
order by
  creation_date;
select
  name,
  arn,
  creation_date
from
  aws_backup_vault
where
  creation_date <= date('now','-90 day')
order by
  creation_date;

List vaults that do not prevent the deletion of backups in the backup vault

Determine the areas in which your backup vaults may be at risk, specifically those that do not have policies in place to prevent the deletion of backups. This query is useful in identifying potential vulnerabilities and ensuring the safety of your data.

select
  name
from
  aws_backup_vault,
  jsonb_array_elements(policy -> 'Statement') as s
where
  s ->> 'Principal' = '*'
  and s ->> 'Effect' != 'Deny'
  and s ->> 'Action' like '%DeleteBackupVault%';
select
  name
from
  aws_backup_vault,
  json_each(policy, '$.Statement') as s
where
  json_extract(s.value, '$.Principal') = '*'
  and json_extract(s.value, '$.Effect') != 'Deny'
  and json_extract(s.value, '$.Action') like '%DeleteBackupVault%';

List policy details for backup vaults

Determine the areas in which your AWS backup vault policies are applied. This helps in understanding the security measures in place for your backup vaults, assisting in maintaining data integrity and safety.

select
  name,
  jsonb_pretty(policy) as policy,
  jsonb_pretty(policy_std) as policy_std
from
  aws_backup_vault;
select
  name,
  policy,
  policy_std
from
  aws_backup_vault;
title description
Steampipe Table: aws_cloudcontrol_resource - Query AWS Cloud Control API Resource using SQL
Allows users to query AWS Cloud Control API Resource data, providing detailed insights into resource properties, types, and statuses.

Table: aws_cloudcontrol_resource - Query AWS Cloud Control API Resource using SQL

The AWS Cloud Control API Resource is a service that allows you to manage your cloud resources in a programmatic way. It provides a unified, consistent set of application programming interfaces (APIs) and extends the capabilities of AWS CloudFormation to support all AWS resource types. This service allows you to create, read, update, delete, and list resources across multiple AWS services from a single API endpoint.

Table Usage Guide

The aws_cloudcontrol_resource table in Steampipe provides you with information about resources within the AWS Cloud Control API. This table allows you, as a DevOps engineer, to query resource-specific details, including resource properties, types, and statuses. You can utilize this table to gather insights on resources, such as the resource's specific properties, the type of the resource, and the current status of the resource. The schema outlines for you the various attributes of the AWS Cloud Control API resource, including the resource name, resource type, role ARN, and associated metadata.

Important Notes

  • In order to list resources, the type_name column must be specified. Some resources also require additional information, which is specified in the resource_model column. For more information on these resource types, please see Resources that require additional information.

  • In order to read a resource, the type_name and identifier columns must be specified. The identifier for each resource type is different, for more information on identifiers please see Identifying resources.

We recommend using native Steampipe tables when available, but this table is helpful for querying uncommon resources that are not yet supported.
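The two notes above can be sketched together in a single read-style query. Both `type_name` and `identifier` are required to read a resource; the bucket name below is a hypothetical identifier, not a real resource:

```sql
-- Read a single resource: type_name selects the resource type,
-- identifier selects the specific resource (here, an S3 bucket name).
select
  identifier,
  properties
from
  aws_cloudcontrol_resource
where
  type_name = 'AWS::S3::Bucket'
  and identifier = 'my-example-bucket';
```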

Known limitations

  • AWS::S3::Bucket will only include detailed information if an identifier is provided. There is no way to determine the region of a bucket from the list result, so full information cannot be automatically hydrated.
  • Global resources like AWS::IAM::Role will return duplicate results per region. Specify region = 'us-east-1' (or similar) in the where clause to avoid the duplicates.

For more information on other Cloud Control limitations and caveats, please see A deep dive into AWS Cloud Control for asset inventory.

Examples

List Lambda functions

Explore the Lambda functions within your AWS environment, focusing on aspects like their associated identifiers, regions, and runtime settings. This analysis can help in understanding the setup and distribution of your Lambda functions, which is crucial for optimizing resource allocation and troubleshooting.

select
  identifier,
  properties ->> 'Arn' as arn,
  properties ->> 'MemorySize' as memory_size,
  properties ->> 'Runtime' as runtime,
  region
from
  aws_cloudcontrol_resource
where
  type_name = 'AWS::Lambda::Function';
select
  identifier,
  json_extract(properties, '$.Arn') as arn,
  json_extract(properties, '$.MemorySize') as memory_size,
  json_extract(properties, '$.Runtime') as runtime,
  region
from
  aws_cloudcontrol_resource
where
  type_name = 'AWS::Lambda::Function';

List ELBv2 listeners for a load balancer

Explore the settings of specific listeners within a load balancer to understand their protocols, ports, and certificates, particularly useful for auditing and optimizing network traffic management. Listeners are a sub-resource, so they can only be listed when the LoadBalancerArn is passed in the resource_model column.

Warning: This does not work with multi-account in Steampipe. The query will be run against all accounts and Cloud Control returns a GeneralServiceException (rather than NotFound), making it difficult to handle.

Warning: If using multi-region in Steampipe then you MUST specify the region in the query. Otherwise, the request will be tried against each region. This would be slow anyway, but because Cloud Control returns a GeneralServiceException (rather than NotFound), we cannot handle it automatically.

select
  identifier,
  properties ->> 'AlpnPolicy' as alpn_policy,
  properties ->> 'Certificates' as certificates,
  properties ->> 'Port' as port,
  properties ->> 'Protocol' as protocol,
  region,
  account_id
from
  aws_cloudcontrol_resource
where
  type_name = 'AWS::ElasticLoadBalancingV2::Listener'
  and resource_model = '{"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/test-lb/4e695b8755d7003c"}'
  and region = 'us-east-1';
select
  identifier,
  json_extract(properties, '$.AlpnPolicy') as alpn_policy,
  json_extract(properties, '$.Certificates') as certificates,
  json_extract(properties, '$.Port') as port,
  json_extract(properties, '$.Protocol') as protocol,
  region,
  account_id
from
  aws_cloudcontrol_resource
where
  type_name = 'AWS::ElasticLoadBalancingV2::Listener'
  and resource_model = '{"LoadBalancerArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/test-lb/4e695b8755d7003c"}'
  and region = 'us-east-1';

Get details for a CloudTrail trail

Determine the status and settings of a specific CloudTrail trail in your AWS environment. This can be essential in auditing and understanding your cloud resource configuration for security and compliance purposes. You can get a single specific resource by setting the identifier.

select
  identifier,
  properties ->> 'IncludeGlobalServiceEvents' as include_global_service_events,
  properties ->> 'IsLogging' as is_logging,
  properties ->> 'IsMultiRegionTrail' as is_multi_region_trail,
  region
from
  aws_cloudcontrol_resource
where
  type_name = 'AWS::CloudTrail::Trail'
  and identifier = 'my-trail';
select
  identifier,
  json_extract(properties, '$.IncludeGlobalServiceEvents') as include_global_service_events,
  json_extract(properties, '$.IsLogging') as is_logging,
  json_extract(properties, '$.IsMultiRegionTrail') as is_multi_region_trail,
  region
from
  aws_cloudcontrol_resource
where
  type_name = 'AWS::CloudTrail::Trail'
  and identifier = 'my-trail';

List global resources using a single region

Determine the areas in which global resources are utilized through a single region. This is useful for managing and optimizing resource usage within a specified region in the AWS cloud environment. Global resources (e.g. AWS::IAM::Role) are returned by each region endpoint. When working with a multi-region configuration in Steampipe this creates duplicate rows. To avoid the duplicates, you can specify a region qualifier.

select
  properties ->> 'RoleName' as name
from
  aws_cloudcontrol_resource
where
  type_name = 'AWS::IAM::Role'
  and region = 'us-east-1'
order by
  name;
select
  json_extract(properties, '$.RoleName') as name
from
  aws_cloudcontrol_resource
where
  type_name = 'AWS::IAM::Role'
  and region = 'us-east-1'
order by
  name;
title description
Steampipe Table: aws_cloudformation_stack - Query AWS CloudFormation Stack using SQL
Allows users to query AWS CloudFormation Stack data, including stack name, status, creation time, and associated tags.

Table: aws_cloudformation_stack - Query AWS CloudFormation Stack using SQL

The AWS CloudFormation Stack is a service that allows you to manage and provision AWS resources in an orderly and predictable fashion. You can use AWS CloudFormation to leverage AWS products such as Amazon EC2, Amazon Elastic Block Store, Amazon SNS, Elastic Load Balancing, and Auto Scaling to build highly reliable, highly scalable, cost-effective applications without creating or configuring the underlying AWS infrastructure. With CloudFormation, you describe your desired resources in a template, and AWS CloudFormation takes care of provisioning and configuring those resources for you.

Table Usage Guide

The aws_cloudformation_stack table in Steampipe provides you with information about stacks within AWS CloudFormation. This table enables you as a DevOps engineer to query stack-specific details, including stack name, status, creation time, and associated tags. You can utilize this table to gather insights on stacks, such as stack status, stack resources, stack capabilities, and more. The schema outlines the various attributes of the CloudFormation stack for you, including stack ID, stack name, creation time, stack status, and associated tags.

Examples

Find the status of each cloudformation stack

Explore the current status of each AWS CloudFormation stack to monitor the health and progress of your infrastructure deployments. This can help in identifying any potential issues or failures in your stack deployments.

select
  name,
  id,
  status
from
  aws_cloudformation_stack;
select
  name,
  id,
  status
from
  aws_cloudformation_stack;

List of cloudformation stack where rollback is disabled

Discover the segments that have disabled rollback in their AWS CloudFormation stacks. This can be useful for identifying potential risk areas, as these stacks will not automatically revert to a previous state if an error occurs during stack operations.

select
  name,
  disable_rollback
from
  aws_cloudformation_stack
where
  disable_rollback;
select
  name,
  disable_rollback
from
  aws_cloudformation_stack
where
  disable_rollback = 1;

List of stacks where termination protection is not enabled

Discover the segments that have not enabled termination protection in their stacks. This is crucial to identify potential risk areas and ensure the safety of your resources.

select
  name,
  enable_termination_protection
from
  aws_cloudformation_stack
where
  not enable_termination_protection;
select
  name,
  enable_termination_protection
from
  aws_cloudformation_stack
where
  enable_termination_protection = 0;

Rollback configuration info for each cloudformation stack

Explore the settings of your AWS CloudFormation stacks to understand their rollback configurations, including how long they monitor for signs of trouble and what triggers a rollback. This can help optimize your stack management by adjusting these settings based on your operational needs.

select
  name,
  rollback_configuration ->> 'MonitoringTimeInMinutes' as monitoring_time_in_min,
  rollback_configuration ->> 'RollbackTriggers' as rollback_triggers
from
  aws_cloudformation_stack;
select
  name,
  json_extract(rollback_configuration, '$.MonitoringTimeInMinutes') as monitoring_time_in_min,
  json_extract(rollback_configuration, '$.RollbackTriggers') as rollback_triggers
from
  aws_cloudformation_stack;

Resource ARNs where notifications about stack actions will be sent

Determine the areas in which notifications related to stack actions will be sent. This is useful for managing and tracking changes in your AWS CloudFormation stacks.

select
  name,
  jsonb_array_elements_text(notification_arns) as resource_arns
from
  aws_cloudformation_stack;
select
  name,
  json_each.value as resource_arns
from
  aws_cloudformation_stack,
  json_each(notification_arns);
title description
Steampipe Table: aws_cloudformation_stack_resource - Query AWS CloudFormation Stack Resources using SQL
Allows users to query AWS CloudFormation Stack Resources, providing details about each resource within the stack, including its status, type, and associated metadata. This table is useful for managing and analyzing AWS CloudFormation resources.

Table: aws_cloudformation_stack_resource - Query AWS CloudFormation Stack Resources using SQL

The AWS CloudFormation Stack Resources are the AWS resources that are part of a stack. AWS CloudFormation simplifies the process of managing your AWS resources by treating all the resources as a single unit, called a stack. These resources can be created, updated, or deleted in a single operation, making it easier to manage and configure all the resources collectively.

Table Usage Guide

The aws_cloudformation_stack_resource table in Steampipe provides you with information about Stack Resources within AWS CloudFormation. This table allows you, as a DevOps engineer, to query resource-specific details, including the current status, resource type, and associated metadata. You can utilize this table to gather insights on resources, such as resource status, the type of resources used in the stack, and more. The schema outlines the various attributes of the Stack Resource for you, including the stack name, resource status, logical resource id, and physical resource id.

Examples

Basic info

Explore the status and type of resources within your AWS CloudFormation stack to better understand your stack's configuration and resource allocation. This allows for effective resource management and helps identify potential issues in your stack's setup.

select
  stack_name,
  stack_id,
  logical_resource_id,
  resource_type,
  resource_status
from
  aws_cloudformation_stack_resource;
select
  stack_name,
  stack_id,
  logical_resource_id,
  resource_type,
  resource_status
from
  aws_cloudformation_stack_resource;

List cloudformation stack resources having rollback disabled

Determine the areas in your AWS CloudFormation setup where rollback is disabled, allowing you to understand potential risk points in your infrastructure. This can be useful in identifying instances where a failure in stack creation or update could lead to resource inconsistencies.

select
  s.name,
  s.disable_rollback,
  r.logical_resource_id,
  r.resource_status
from
  aws_cloudformation_stack_resource as r,
  aws_cloudformation_stack as s
where
  r.stack_id = s.id
  and s.disable_rollback;
select
  s.name,
  s.disable_rollback,
  r.logical_resource_id,
  r.resource_status
from
  aws_cloudformation_stack_resource as r
join
  aws_cloudformation_stack as s
on
  r.stack_id = s.id
where
  s.disable_rollback = 1;

List resources having termination protection disabled

Determine the areas in which resources could be at risk due to disabled termination protection. This is useful for identifying potential vulnerabilities within your CloudFormation stacks.

select
  s.name,
  s.enable_termination_protection,
  s.disable_rollback,
  r.logical_resource_id,
  r.resource_status
from
  aws_cloudformation_stack_resource as r,
  aws_cloudformation_stack as s
where
  r.stack_id = s.id
  and not s.enable_termination_protection;
select
  s.name,
  s.enable_termination_protection,
  s.disable_rollback,
  r.logical_resource_id,
  r.resource_status
from
  aws_cloudformation_stack_resource as r
join
  aws_cloudformation_stack as s
on
  r.stack_id = s.id
where
  not s.enable_termination_protection;

List stack resources of type VPC

Discover the segments that are utilizing Virtual Private Cloud (VPC) resources within your AWS CloudFormation stacks. This is useful for understanding your resource allocation and identifying any potential areas of optimization.

select
  stack_name,
  stack_id,
  logical_resource_id,
  resource_status,
  resource_type
from
  aws_cloudformation_stack_resource
where
  resource_type = 'AWS::EC2::VPC';
select
  stack_name,
  stack_id,
  logical_resource_id,
  resource_status,
  resource_type
from
  aws_cloudformation_stack_resource
where
  resource_type = 'AWS::EC2::VPC';

List resources that failed to update

Identify instances where updates to cloud resources failed. This can help in troubleshooting and rectifying issues to ensure smooth operation of your cloud infrastructure.

select
  stack_name,
  logical_resource_id,
  resource_status,
  resource_type
from
  aws_cloudformation_stack_resource
where
  resource_status = 'UPDATE_FAILED';
select
  stack_name,
  logical_resource_id,
  resource_status,
  resource_type
from
  aws_cloudformation_stack_resource
where
  resource_status = 'UPDATE_FAILED';
title description
Steampipe Table: aws_cloudformation_stack_set - Query AWS CloudFormation StackSets using SQL
Allows users to query AWS CloudFormation StackSets, providing detailed information about each StackSet's configuration, status, and associated AWS resources.

Table: aws_cloudformation_stack_set - Query AWS CloudFormation StackSets using SQL

The AWS CloudFormation StackSets is a feature within the AWS CloudFormation service that allows you to create, update, or delete stacks across multiple accounts and regions with a single AWS CloudFormation template. StackSets takes care of the underlying details of orchestrating stack operations across multiple accounts and regions, ensuring that the stacks are created, updated, or deleted in a specified order. This simplifies the management of AWS resources and enables the easy deployment of regional and global applications.

Table Usage Guide

The aws_cloudformation_stack_set table in Steampipe provides you with information about StackSets within AWS CloudFormation. This table allows you, as a DevOps engineer, to query StackSet-specific details, including its configuration, status, and AWS resources associated with it. You can utilize this table to gather insights on StackSets, such as StackSets with specific configurations, their current status, and more. The schema outlines the various attributes of the StackSet for you, including the StackSet ID, description, status, template body, and associated tags.

Examples

Basic info

Explore which AWS CloudFormation stack sets are in use and their current status. This can be useful for auditing purposes, understanding your resource utilization, and identifying any potential issues with your stacks.

select
  stack_set_id,
  stack_set_name,
  status,
  arn,
  description
from
  aws_cloudformation_stack_set;
select
  stack_set_id,
  stack_set_name,
  status,
  arn,
  description
from
  aws_cloudformation_stack_set;

List active stack sets

Determine the areas in which active stack sets are being used within your AWS CloudFormation service. This allows you to monitor and manage your active resources effectively.

select
  stack_set_id,
  stack_set_name,
  status,
  permission_model,
  auto_deployment
from
  aws_cloudformation_stack_set
where
  status = 'ACTIVE';
select
  stack_set_id,
  stack_set_name,
  status,
  permission_model,
  auto_deployment
from
  aws_cloudformation_stack_set
where
  status = 'ACTIVE';

Get parameter details of stack sets

This query allows you to delve into the specifics of your stack sets within AWS CloudFormation. It's particularly valuable for understanding the parameters associated with each stack set, which can help in managing and optimizing your cloud resources.

select
  stack_set_name,
  stack_set_id,
  p ->> 'ParameterKey' as parameter_key,
  p ->> 'ParameterValue' as parameter_value,
  p ->> 'ResolvedValue' as resolved_value,
  p ->> 'UsePreviousValue' as use_previous_value
from
  aws_cloudformation_stack_set,
  jsonb_array_elements(parameters) as p;
select
  stack_set_name,
  stack_set_id,
  json_extract(p.value, '$.ParameterKey') as parameter_key,
  json_extract(p.value, '$.ParameterValue') as parameter_value,
  json_extract(p.value, '$.ResolvedValue') as resolved_value,
  json_extract(p.value, '$.UsePreviousValue') as use_previous_value
from
  aws_cloudformation_stack_set,
  json_each(parameters) as p;

Get drift detection details of stack sets

Explore the drift detection status of your stack sets to identify any potential issues or discrepancies. This can help in maintaining the overall health and integrity of your stack sets.

select
  stack_set_name,
  stack_set_id,
  stack_set_drift_detection_details ->> 'DriftDetectionStatus' as drift_detection_status,
  stack_set_drift_detection_details ->> 'DriftStatus' as drift_status,
  stack_set_drift_detection_details ->> 'DriftedStackInstancesCount' as drifted_stack_instances_count,
  stack_set_drift_detection_details ->> 'FailedStackInstancesCount' as failed_stack_instances_count,
  stack_set_drift_detection_details ->> 'InProgressStackInstancesCount' as in_progress_stack_instances_count,
  stack_set_drift_detection_details ->> 'InSyncStackInstancesCount' as in_sync_stack_instances_count,
  stack_set_drift_detection_details ->> 'LastDriftCheckTimestamp' as last_drift_check_timestamp,
  stack_set_drift_detection_details ->> 'TotalStackInstancesCount' as total_stack_instances_count
from
  aws_cloudformation_stack_set;
select
  stack_set_name,
  stack_set_id,
  json_extract(stack_set_drift_detection_details, '$.DriftDetectionStatus') as drift_detection_status,
  json_extract(stack_set_drift_detection_details, '$.DriftStatus') as drift_status,
  json_extract(stack_set_drift_detection_details, '$.DriftedStackInstancesCount') as drifted_stack_instances_count,
  json_extract(stack_set_drift_detection_details, '$.FailedStackInstancesCount') as failed_stack_instances_count,
  json_extract(stack_set_drift_detection_details, '$.InProgressStackInstancesCount') as in_progress_stack_instances_count,
  json_extract(stack_set_drift_detection_details, '$.InSyncStackInstancesCount') as in_sync_stack_instances_count,
  json_extract(stack_set_drift_detection_details, '$.LastDriftCheckTimestamp') as last_drift_check_timestamp,
  json_extract(stack_set_drift_detection_details, '$.TotalStackInstancesCount') as total_stack_instances_count
from
  aws_cloudformation_stack_set;
title description
Steampipe Table: aws_cloudfront_cache_policy - Query AWS CloudFront Cache Policies using SQL
Allows users to query AWS CloudFront Cache Policies for details about their configuration, status, and associated metadata.

Table: aws_cloudfront_cache_policy - Query AWS CloudFront Cache Policies using SQL

The AWS CloudFront Cache Policy is a feature of AWS CloudFront that allows you to specify detailed cache behaviors, including how, when, and where CloudFront caches and delivers content. It provides control over the data that CloudFront uses to serve requests, including headers, cookies, and query strings. This policy aids in optimizing the cache key and improving the cache hit ratio, thereby enhancing the performance of your application.

Table Usage Guide

The aws_cloudfront_cache_policy table in Steampipe provides you with information about Cache Policies within AWS CloudFront. This table allows you, as a DevOps engineer, to query policy-specific details, including the configuration, status, and associated metadata. You can utilize this table to gather insights on cache policies, such as their identifiers, comment descriptions, the default time to live (TTL), maximum and minimum TTL, and more. The schema outlines the various attributes of the cache policy for you, including the policy ARN, creation time, last modified time, and associated tags.

Examples

Basic info

Explore which AWS CloudFront cache policies are in place to understand their impact on content delivery and caching strategies. This can be beneficial in optimizing resource usage and reducing costs.

select
  id,
  name,
  comment,
  min_ttl,
  etag,
  last_modified_time
from
  aws_cloudfront_cache_policy;
select
  id,
  name,
  comment,
  min_ttl,
  etag,
  last_modified_time
from
  aws_cloudfront_cache_policy;

List cache policies where Gzip compression format is not enabled

Identify instances where Gzip compression format is not enabled in AWS CloudFront cache policies. This can help to optimize content delivery and improve website loading speeds.

select
  id,
  name,
  parameters_in_cache_key_and_forwarded_to_origin ->> 'EnableAcceptEncodingGzip' as enable_gzip
from
  aws_cloudfront_cache_policy
where
  parameters_in_cache_key_and_forwarded_to_origin ->> 'EnableAcceptEncodingGzip' <> 'true';
select
  id,
  name,
  json_extract(parameters_in_cache_key_and_forwarded_to_origin, '$.EnableAcceptEncodingGzip') as enable_gzip
from
  aws_cloudfront_cache_policy
where
  json_extract(parameters_in_cache_key_and_forwarded_to_origin, '$.EnableAcceptEncodingGzip') <> 'true';

List cache policies where Brotli compression format is not enabled

Identify instances where Brotli compression format is not enabled in cache policies. This could help improve website performance by enabling more efficient data compression.

select
  id,
  name,
  parameters_in_cache_key_and_forwarded_to_origin ->> 'EnableAcceptEncodingBrotli' as enable_brotli
from
  aws_cloudfront_cache_policy
where
  parameters_in_cache_key_and_forwarded_to_origin ->> 'EnableAcceptEncodingBrotli' <> 'true';
select
  id,
  name,
  json_extract(parameters_in_cache_key_and_forwarded_to_origin, '$.EnableAcceptEncodingBrotli') as enable_brotli
from
  aws_cloudfront_cache_policy
where
  json_extract(parameters_in_cache_key_and_forwarded_to_origin, '$.EnableAcceptEncodingBrotli') <> 'true';
title description
Steampipe Table: aws_cloudfront_distribution - Query AWS CloudFront Distributions using SQL
Allows users to query AWS CloudFront Distributions to gain insights into their configuration, status, and associated metadata.

Table: aws_cloudfront_distribution - Query AWS CloudFront Distributions using SQL

The AWS CloudFront Distributions is a part of Amazon's content delivery network (CDN) services. It speeds up the distribution of your static and dynamic web content, such as .html, .css, .js, and image files, to your users. CloudFront delivers your content through a worldwide network of data centers called edge locations and ensures that end-user requests are served by the closest edge location.

Table Usage Guide

The aws_cloudfront_distribution table in Steampipe provides you with information about distributions within AWS CloudFront. This table allows you, as a DevOps engineer, to query distribution-specific details, including distribution configuration, status, and associated metadata. You can utilize this table to gather insights on distributions, such as viewing all distributions, checking if logging is enabled, verifying if a distribution is configured to use a custom SSL certificate, and more. The schema outlines the various attributes of the CloudFront distribution for you, including the ARN, domain name, status, and associated tags.

Examples

Basic info

Analyze the settings of your AWS Cloudfront distributions to understand their current status and configuration. This can help you to identify potential issues or areas for improvement, such as outdated HTTP versions or disabled IPv6.

select
  id,
  arn,
  status,
  domain_name,
  enabled,
  e_tag,
  http_version,
  is_ipv6_enabled
from
  aws_cloudfront_distribution;
select
  id,
  arn,
  status,
  domain_name,
  enabled,
  e_tag,
  http_version,
  is_ipv6_enabled
from
  aws_cloudfront_distribution;

List distributions with logging disabled

Determine the areas in your AWS Cloudfront distribution settings where logging is disabled. This is useful for identifying potential gaps in your logging strategy, which could impact security and troubleshooting capabilities.

select
  id,
  logging ->> 'Bucket' as bucket,
  logging ->> 'Enabled' as logging_enabled,
  logging ->> 'IncludeCookies' as include_cookies
from
  aws_cloudfront_distribution
where
  logging ->> 'Enabled' = 'false';
select
  id,
  json_extract(logging, '$.Bucket') as bucket,
  json_extract(logging, '$.Enabled') as logging_enabled,
  json_extract(logging, '$.IncludeCookies') as include_cookies
from
  aws_cloudfront_distribution
where
  json_extract(logging, '$.Enabled') = 'false';

List distributions with IPv6 DNS requests not enabled

Identify instances where IPv6 DNS requests are not enabled within your AWS CloudFront distributions. This can help in improving network performance and future-proofing your system as IPv6 becomes more prevalent.

select
  id,
  arn,
  status,
  is_ipv6_enabled
from
  aws_cloudfront_distribution
where
  is_ipv6_enabled = 'false';
select
  id,
  arn,
  status,
  is_ipv6_enabled
from
  aws_cloudfront_distribution
where
  is_ipv6_enabled = 'false';

List distributions that enforce field-level encryption

Determine the areas in which field-level encryption is enforced within your distributions. This can be handy for improving security by ensuring sensitive data fields are encrypted.

select
  id,
  arn,
  default_cache_behavior ->> 'FieldLevelEncryptionId' as field_level_encryption_id,
  default_cache_behavior ->> 'DefaultTTL' as default_ttl
from
  aws_cloudfront_distribution
where
  default_cache_behavior ->> 'FieldLevelEncryptionId' <> '';
select
  id,
  arn,
  json_extract(default_cache_behavior, '$.FieldLevelEncryptionId') as field_level_encryption_id,
  json_extract(default_cache_behavior, '$.DefaultTTL') as default_ttl
from
  aws_cloudfront_distribution
where
  json_extract(default_cache_behavior, '$.FieldLevelEncryptionId') <> '';

List distributions whose origins use encrypted traffic

Determine the areas in which your AWS Cloudfront distributions are utilizing encrypted traffic. This can be beneficial to ensure data security and compliance with industry standards and regulations.

select
  id,
  arn,
  p -> 'CustomOriginConfig' -> 'HTTPPort' as http_port,
  p -> 'CustomOriginConfig' -> 'HTTPSPort' as https_port,
  p -> 'CustomOriginConfig' -> 'OriginKeepaliveTimeout' as origin_keepalive_timeout,
  p -> 'CustomOriginConfig' -> 'OriginProtocolPolicy' as origin_protocol_policy
from
  aws_cloudfront_distribution,
  jsonb_array_elements(origins) as p
where
  p -> 'CustomOriginConfig' ->> 'OriginProtocolPolicy' = 'https-only';
select
  id,
  arn,
  json_extract(p.value, '$.CustomOriginConfig.HTTPPort') as http_port,
  json_extract(p.value, '$.CustomOriginConfig.HTTPSPort') as https_port,
  json_extract(p.value, '$.CustomOriginConfig.OriginKeepaliveTimeout') as origin_keepalive_timeout,
  json_extract(p.value, '$.CustomOriginConfig.OriginProtocolPolicy') as origin_protocol_policy
from
  aws_cloudfront_distribution,
  json_each(origins) as p
where
  json_extract(p.value, '$.CustomOriginConfig.OriginProtocolPolicy') = 'https-only';

List distributions whose origins use insecure SSL protocols

Discover the segments of your Cloudfront distributions where origins are using insecure SSL protocols. This is useful for identifying potential security vulnerabilities in your network.

select
  id,
  arn,
  p -> 'CustomOriginConfig' -> 'OriginSslProtocols' -> 'Items' as items,
  p -> 'CustomOriginConfig' -> 'OriginSslProtocols' -> 'Quantity' as quantity
from
  aws_cloudfront_distribution,
  jsonb_array_elements(origins) as p
where
  p -> 'CustomOriginConfig' -> 'OriginSslProtocols' -> 'Items' ?& array['SSLv3'];
select
  id,
  arn,
  json_extract(p.value, '$.CustomOriginConfig.OriginSslProtocols.Items') as items,
  json_extract(p.value, '$.CustomOriginConfig.OriginSslProtocols.Quantity') as quantity
from
  aws_cloudfront_distribution,
  json_each(origins) as p
where
  json_extract(p.value, '$.CustomOriginConfig.OriginSslProtocols.Items') LIKE '%SSLv3%';
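
To gauge how widely each SSL/TLS protocol is configured across origins, the same origins document can be unnested and grouped. This is a sketch (not one of the standard examples) that reuses only the columns shown above; origins without a CustomOriginConfig, such as S3 origins, contribute no rows.

```sql
-- Count distributions per configured origin SSL/TLS protocol (PostgreSQL)
select
  protocol,
  count(distinct id) as distribution_count
from
  aws_cloudfront_distribution,
  jsonb_array_elements(origins) as p,
  jsonb_array_elements_text(p -> 'CustomOriginConfig' -> 'OriginSslProtocols' -> 'Items') as protocol
group by
  protocol
order by
  distribution_count desc;
```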
title description
Steampipe Table: aws_cloudfront_function - Query AWS CloudFront Functions using SQL
Allows users to query AWS CloudFront Functions to retrieve detailed information about each function, including its ARN, stage, status, and more.

Table: aws_cloudfront_function - Query AWS CloudFront Functions using SQL

The AWS CloudFront Function is a feature of Amazon CloudFront that allows you to write lightweight functions in JavaScript for high-scale, latency-sensitive CDN customizations. These functions execute at the edge locations, closer to the viewer, allowing you to manipulate HTTP request and response headers, URL, and methods. This feature helps in delivering a highly personalized content with low latency to your viewers.

Table Usage Guide

The aws_cloudfront_function table in Steampipe provides you with information about functions within AWS CloudFront. This table allows you, as a DevOps engineer, to query function-specific details, including the function's ARN, stage, status, and associated metadata. You can utilize this table to gather insights on functions, such as their status, the events they are associated with, and more. The schema outlines the various attributes of the CloudFront function for you, including the function ARN, creation timestamp, last modified timestamp, and associated tags.

Examples

Basic info

Explore the name, status, and configuration of each of your CloudFront functions, along with its ETag.

select
  name,
  status,
  arn,
  e_tag,
  function_config
from
  aws_cloudfront_function;
select
  name,
  status,
  arn,
  e_tag,
  function_config
from
  aws_cloudfront_function;

List details of all functions deployed to the live stage

Identify the functions currently serving traffic in the LIVE stage, along with their comments, status, and ETags.

select
  name,
  function_config ->> 'Comment' as comment,
  arn,
  status,
  e_tag
from
  aws_cloudfront_function
where
  function_metadata ->> 'Stage' = 'LIVE';
select
  name,
  json_extract(function_config, '$.Comment') as comment,
  arn,
  status,
  e_tag
from
  aws_cloudfront_function
where
  json_extract(function_metadata, '$.Stage') = 'LIVE';

List functions ordered by creation time, latest first

Review your CloudFront functions from newest to oldest to see which were added most recently.

select
  name,
  arn,
  function_metadata ->> 'Stage' as stage,
  status,
  function_metadata ->> 'CreatedTime' as created_time,
  function_metadata ->> 'LastModifiedTime' as last_modified_time
from
  aws_cloudfront_function
order by
  function_metadata ->> 'CreatedTime' DESC;
select
  name,
  arn,
  json_extract(function_metadata, '$.Stage') as stage,
  status,
  json_extract(function_metadata, '$.CreatedTime') as created_time,
  json_extract(function_metadata, '$.LastModifiedTime') as last_modified_time
from
  aws_cloudfront_function
order by
  json_extract(function_metadata, '$.CreatedTime') DESC;

List functions updated in the last hour, latest first

Track recent changes by listing the functions modified within the past hour.

select
  name,
  arn,
  function_metadata ->> 'Stage' as stage,
  status,
  function_metadata ->> 'LastModifiedTime' as last_modified_time
from
  aws_cloudfront_function
where
  (function_metadata ->> 'LastModifiedTime')::timestamp >= (now() - interval '1' hour)
order by
  function_metadata ->> 'LastModifiedTime' DESC;
select
  name,
  arn,
  json_extract(function_metadata, '$.Stage') as stage,
  status,
  json_extract(function_metadata, '$.LastModifiedTime') as last_modified_time
from
  aws_cloudfront_function
where
  datetime(json_extract(function_metadata, '$.LastModifiedTime')) >= datetime('now', '-1 hour')
order by
  json_extract(function_metadata, '$.LastModifiedTime') DESC;
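
As a further sketch (not from the official examples), the function_config document also carries the function's runtime, which can help find functions still on the older cloudfront-js-1.0 runtime. The Runtime key is assumed here to be present in function_config; adjust if your plugin version exposes it differently.

```sql
-- Find functions on the cloudfront-js-1.0 runtime (PostgreSQL)
select
  name,
  status,
  function_config ->> 'Runtime' as runtime
from
  aws_cloudfront_function
where
  function_config ->> 'Runtime' = 'cloudfront-js-1.0';
```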
title description
Steampipe Table: aws_cloudfront_origin_access_identity - Query AWS CloudFront Origin Access Identity using SQL
Allows users to query AWS CloudFront Origin Access Identity to fetch detailed information about each identity, including its ID, S3 canonical user ID, caller reference, and associated comment.

Table: aws_cloudfront_origin_access_identity - Query AWS CloudFront Origin Access Identity using SQL

The AWS CloudFront Origin Access Identity is a special CloudFront feature that allows secure access to your content within an Amazon S3 bucket. It's used as a virtual identity to enable sharing of your content with CloudFront while restricting access directly to your S3 bucket. Thus, it helps in maintaining the privacy of your data by preventing direct access to S3 resources.

Table Usage Guide

The aws_cloudfront_origin_access_identity table in Steampipe provides you with information about each origin access identity within AWS CloudFront. This table allows you, as a DevOps engineer, to query identity-specific details, including the identity's ID, S3 canonical user ID, caller reference, and associated comment. You can utilize this table to gather insights on origin access identities, such as the identity's configuration and CloudFront caller reference. The schema outlines the various attributes of the origin access identity for you, including the ID, S3 canonical user ID, caller reference, and comment.

Examples

Basic Info

Explore the foundational details of your AWS Cloudfront origin access identities to better understand your system's configuration and identify any potential areas for optimization or troubleshooting. This query is particularly useful for gaining insights into the identities' associated comments, user IDs, and unique identifiers, which can assist in system management and auditing tasks.

select
  id,
  arn,
  comment,
  s3_canonical_user_id,
  etag
from
  aws_cloudfront_origin_access_identity;
select
  id,
  arn,
  comment,
  s3_canonical_user_id,
  etag
from
  aws_cloudfront_origin_access_identity;

List origin access identity with comments

Discover the segments that have comments associated with their origin access identity in AWS Cloudfront. This is useful for understanding which identities have additional information or instructions provided, aiding in better resource management.

select
  id,
  arn,
  comment,
  caller_reference
from
  aws_cloudfront_origin_access_identity
where
  comment <> '';
select
  id,
  arn,
  comment,
  caller_reference
from
  aws_cloudfront_origin_access_identity
where
  comment != '';
title description
Steampipe Table: aws_cloudfront_origin_request_policy - Query AWS CloudFront Origin Request Policies using SQL
Allows users to query AWS CloudFront Origin Request Policies, providing details about each policy such as ID, name, comment, cookies configuration, headers configuration, query strings configuration, and more.

Table: aws_cloudfront_origin_request_policy - Query AWS CloudFront Origin Request Policies using SQL

The AWS CloudFront Origin Request Policy is a feature of Amazon CloudFront, a content delivery network service. It allows you to control how much information about the viewer's request is forwarded to the origin. This includes headers, cookies, and URL query strings, enabling you to customize the content returned by your origin based on the values in the request.

Table Usage Guide

The aws_cloudfront_origin_request_policy table in Steampipe provides you with information about Origin Request Policies within AWS CloudFront. This table allows you, as a DevOps engineer, to query policy-specific details, including ID, name, comment, cookies configuration, headers configuration, query strings configuration, and more. You can utilize this table to gather insights on policies, such as policy configurations and associated metadata. The schema outlines the various attributes of the Origin Request Policy for you, including the policy ID, creation date, last modified date, and associated tags.

Examples

Basic info

Explore which AWS Cloudfront origin request policies have been modified recently, gaining insights into potential changes and updates. This can be useful for maintaining security compliance and ensuring correct configuration.

select
  name,
  id,
  comment,
  etag,
  last_modified_time
from
  aws_cloudfront_origin_request_policy;
select
  name,
  id,
  comment,
  etag,
  last_modified_time
from
  aws_cloudfront_origin_request_policy;

Get details of HTTP headers associated with each origin request policy

Determine the characteristics of HTTP headers related to each origin request policy. This can be useful to understand how your CloudFront distributions are configured, which can help in optimizing your web content delivery and troubleshooting issues.

select
  name,
  id,
  headers_config ->> 'HeaderBehavior' as header_behavior,
  headers_config ->> 'Headers' as headers
from
  aws_cloudfront_origin_request_policy;
select
  name,
  id,
  json_extract(headers_config, '$.HeaderBehavior') as header_behavior,
  json_extract(headers_config, '$.Headers') as headers
from
  aws_cloudfront_origin_request_policy;
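
Similarly, a policy's cookie and query-string forwarding behavior can be read from the cookies_config and query_strings_config documents that the table description above mentions. This is a hedged sketch; the CookieBehavior and QueryStringBehavior keys follow the AWS API field names.

```sql
-- Summarize forwarding behavior per origin request policy (PostgreSQL)
select
  name,
  id,
  cookies_config ->> 'CookieBehavior' as cookie_behavior,
  query_strings_config ->> 'QueryStringBehavior' as query_string_behavior
from
  aws_cloudfront_origin_request_policy;
```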
title description
Steampipe Table: aws_cloudfront_response_headers_policy - Query AWS CloudFront Response Headers Policy using SQL
Allows users to query AWS CloudFront Response Headers Policies, providing information about the policy configurations that determine the headers CloudFront includes in HTTP responses.

Table: aws_cloudfront_response_headers_policy - Query AWS CloudFront Response Headers Policy using SQL

The AWS CloudFront Response Headers Policy is a feature within AWS CloudFront that allows you to manage and customize the HTTP headers returned in the response from your CloudFront distributions. This can be used to enhance the security of your application, improve the caching efficiency, or to provide additional information to the clients. With this policy, you can add, remove, or modify the values of HTTP header fields, providing you with greater control over your content delivery.

Table Usage Guide

The aws_cloudfront_response_headers_policy table in Steampipe provides you with information about the Response Headers Policies within AWS CloudFront. This table allows you, as a DevOps engineer, to query policy-specific details, including policy ID, name, header behavior, and associated custom headers. You can utilize this table to gather insights on policies, such as custom header configurations, header behavior settings, and more. The schema outlines the various attributes of the Response Headers Policy for you, including the policy ARN, creation time, last modified time, and associated tags.

Important Notes

  • This table supports optional quals.
  • Queries with optional quals are optimised to use additional filtering provided by the AWS API.

Examples

Basic info

Discover the segments that have been recently modified in your AWS Cloudfront response headers policy. This can be useful for assessing the elements within the policy including their names, IDs, and descriptions, and understanding any changes or updates that have been made.

select
  name,
  id,
  response_headers_policy_config ->> 'Comment' as description,
  type,
  last_modified_time
from
  aws_cloudfront_response_headers_policy;
select
  name,
  id,
  json_extract(response_headers_policy_config, '$.Comment') as description,
  type,
  last_modified_time
from
  aws_cloudfront_response_headers_policy;

List user created response header policies only

Determine the areas in which user-created response header policies exist within the AWS Cloudfront service. This query is beneficial for understanding the custom configurations that have been implemented, along with their last modification time.

select
  name,
  id,
  response_headers_policy_config ->> 'Comment' as description,
  type,
  last_modified_time
from
  aws_cloudfront_response_headers_policy
where
  type = 'custom';
select
  name,
  id,
  json_extract(response_headers_policy_config, '$.Comment') as description,
  type,
  last_modified_time
from
  aws_cloudfront_response_headers_policy
where
  type = 'custom';

List response header policies that were modified in the last hour

Determine the areas in which response header policies have been recently updated within the last hour. This is useful to track changes and maintain the security and efficiency of your AWS Cloudfront configurations.

select
  name,
  id,
  last_modified_time
from
  aws_cloudfront_response_headers_policy
where
  last_modified_time >= (now() - interval '1' hour)
order by
  last_modified_time DESC;
select
  name,
  id,
  last_modified_time
from
  aws_cloudfront_response_headers_policy
where
  last_modified_time >= (datetime('now','-1 hours'))
order by
  last_modified_time DESC;
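
To inspect the security headers a policy injects, the SecurityHeadersConfig sub-document of response_headers_policy_config can be unpacked. A sketch, assuming the AWS API's SecurityHeadersConfig and StrictTransportSecurity field names:

```sql
-- Show HSTS settings for policies that define security headers (PostgreSQL)
select
  name,
  id,
  response_headers_policy_config -> 'SecurityHeadersConfig' -> 'StrictTransportSecurity' as strict_transport_security
from
  aws_cloudfront_response_headers_policy
where
  response_headers_policy_config -> 'SecurityHeadersConfig' is not null;
```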
title description
Steampipe Table: aws_cloudsearch_domain - Query AWS CloudSearch Domain using SQL
Allows users to query AWS CloudSearch Domain to retrieve detailed information about each search domain configured within an AWS account.

Table: aws_cloudsearch_domain - Query AWS CloudSearch Domain using SQL

The AWS CloudSearch Domain is a component of AWS CloudSearch, a fully-managed service that makes it easy to set up, manage, and scale a search solution for your website or application. AWS CloudSearch features include indexing of data, running search queries, and updating the search index. It provides a high level of flexibility and scalability, allowing you to search large collections of data efficiently.

Table Usage Guide

The aws_cloudsearch_domain table in Steampipe provides you with information about each search domain configured within your AWS account. This table allows you, as a DevOps engineer, data analyst, or other technical professional, to query domain-specific details, including the domain's ARN, creation date, domain ID, and associated metadata. You can utilize this table to gather insights on domains, such as the status, endpoint, and whether the domain requires signing. The schema outlines the various attributes of the CloudSearch domain for you, including the domain ARN, creation date, document count, and associated tags.

Examples

Basic info

Explore the basic information about your AWS CloudSearch domains, such as when they were created and the type and count of search instances. This can help manage resources and assess the capacity and usage of your search domains.

select
  domain_name,
  domain_id,
  arn,
  created,
  search_instance_type,
  search_instance_count
from
  aws_cloudsearch_domain;
select
  domain_name,
  domain_id,
  arn,
  created,
  search_instance_type,
  search_instance_count
from
  aws_cloudsearch_domain;

List domains by instance type

Identify instances where specific domains are linked to a certain type of search instance. This can be useful to understand the spread and usage of different search instances across your domains.

select
  domain_name,
  domain_id,
  arn,
  created,
  search_instance_type
from
  aws_cloudsearch_domain
where
  search_instance_type = 'search.small';
select
  domain_name,
  domain_id,
  arn,
  created,
  search_instance_type
from
  aws_cloudsearch_domain
where
  search_instance_type = 'search.small';

Get limit details for each domain

Explore the limits set for each domain in your AWS CloudSearch to understand how it may impact the performance and availability of your search service. This can help in optimizing the search service configuration for better resource management.

select
  domain_name,
  domain_id,
  search_service ->> 'Endpoint' as search_service_endpoint,
  limits ->> 'MaximumPartitionCount' as maximum_partition_count,
  limits ->> 'MaximumReplicationCount' as maximum_replication_count
from
  aws_cloudsearch_domain;
select
  domain_name,
  domain_id,
  json_extract(search_service, '$.Endpoint') as search_service_endpoint,
  json_extract(limits, '$.MaximumPartitionCount') as maximum_partition_count,
  json_extract(limits, '$.MaximumReplicationCount') as maximum_replication_count
from
  aws_cloudsearch_domain;
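
Count domains by instance type

A quick aggregate over the columns already shown can reveal how your fleet is distributed across instance types; this sketch uses standard SQL that works unchanged in both PostgreSQL and SQLite.

```sql
select
  search_instance_type,
  count(*) as domain_count
from
  aws_cloudsearch_domain
group by
  search_instance_type
order by
  domain_count desc;
```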
title description
Steampipe Table: aws_cloudtrail_channel - Query AWS CloudTrail Channel using SQL
Allows users to query AWS CloudTrail Channel data, including trail configurations, status, and associated metadata.

Table: aws_cloudtrail_channel - Query AWS CloudTrail Channel using SQL

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. It helps you to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. A CloudTrail channel, specifically, represents a connection between CloudTrail and an event source, such as a partner integration that delivers events from outside AWS into CloudTrail Lake.

Table Usage Guide

The aws_cloudtrail_channel table in Steampipe provides you with information about channels within AWS CloudTrail. This table allows you, as a DevOps engineer, to query channel-specific details, including each channel's source, region configuration, and associated advanced event selectors. You can utilize this table to gather insights on channels, such as whether they apply to all regions and which event selectors they carry. The schema outlines the various attributes of the CloudTrail channel for you, including the channel ARN, name, source, and advanced event selectors.

Examples

Basic info

Analyze the settings of your AWS CloudTrail channels to understand whether they are applied to all regions. This is beneficial to ensure consistent logging and monitoring across your entire AWS environment.

select
  name,
  arn,
  source,
  apply_to_all_regions
from
  aws_cloudtrail_channel;
select
  name,
  arn,
  source,
  apply_to_all_regions
from
  aws_cloudtrail_channel;

List channels that are not applied to all regions

Identify the AWS Cloudtrail channels which are not configured to apply to all regions. This can be useful for auditing regional compliance or identifying potential gaps in log coverage.

select
  name,
  arn,
  source,
  apply_to_all_regions,
  advanced_event_selectors
from
  aws_cloudtrail_channel
where
  not apply_to_all_regions;
select
  name,
  arn,
  source,
  apply_to_all_regions,
  advanced_event_selectors
from
  aws_cloudtrail_channel
where
  apply_to_all_regions = 0;

Get advanced event selector details of each channel

Determine the specific event selector details associated with each AWS CloudTrail channel. This query is useful for analyzing channel configurations and identifying any potential areas for optimization or troubleshooting.

select
  name,
  a ->> 'Name' as advanced_event_selector_name,
  a ->> 'FieldSelectors' as field_selectors
from
  aws_cloudtrail_channel,
  jsonb_array_elements(advanced_event_selectors) as a;
select
  name,
  json_extract(a.value, '$.Name') as advanced_event_selector_name,
  json_extract(a.value, '$.FieldSelectors') as field_selectors
from
  aws_cloudtrail_channel,
  json_each(advanced_event_selectors) as a;
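
Count advanced event selectors per channel

Since advanced_event_selectors is a JSON array (as the unnesting query above shows), its length gives a quick per-channel summary. A sketch in PostgreSQL:

```sql
select
  name,
  arn,
  jsonb_array_length(advanced_event_selectors) as selector_count
from
  aws_cloudtrail_channel
order by
  selector_count desc;
```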
title description
Steampipe Table: aws_cloudtrail_event_data_store - Query AWS CloudTrail Event Data using SQL
Allows users to query AWS CloudTrail Event Data, providing information about API activity in AWS accounts. This includes details about API calls, logins, and other events captured by AWS CloudTrail.

Table: aws_cloudtrail_event_data_store - Query AWS CloudTrail Event Data using SQL

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. It allows you to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. The service provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.

Table Usage Guide

The aws_cloudtrail_event_data_store table in Steampipe provides you with information about API activity in your AWS accounts. This includes details about your API calls, logins, and other events captured by AWS CloudTrail. This table allows you, as a DevOps engineer, to query event-specific details, including event names, event sources, and related metadata. You can utilize this table to gather insights on API activity, such as identifying unusual API calls, tracking login activity, and monitoring changes to your AWS resources. The schema outlines the various attributes of the CloudTrail event for you, including the event ID, event time, event name, and user identity.

Examples

Basic info

Explore the status and configuration of your AWS CloudTrail event data stores, including when they were created and their current settings. This can help you maintain security and compliance by ensuring features like multi-region access, organization-wide access, and termination protection are enabled as needed.

select
  name,
  arn,
  status,
  created_timestamp,
  multi_region_enabled,
  organization_enabled,
  termination_protection_enabled
from
  aws_cloudtrail_event_data_store;
select
  name,
  arn,
  status,
  created_timestamp,
  multi_region_enabled,
  organization_enabled,
  termination_protection_enabled
from
  aws_cloudtrail_event_data_store;

List event data stores which are not enabled

Identify instances where event data stores in the AWS CloudTrail service are not enabled. This query is useful in pinpointing potential security vulnerabilities or areas in your system that may not be properly logging and storing event data.

select
  name,
  arn,
  status,
  created_timestamp,
  multi_region_enabled,
  organization_enabled,
  termination_protection_enabled
from
  aws_cloudtrail_event_data_store
where
  status <> 'ENABLED';
select
  name,
  arn,
  status,
  created_timestamp,
  multi_region_enabled,
  organization_enabled,
  termination_protection_enabled
from
  aws_cloudtrail_event_data_store
where
  status != 'ENABLED';

List event data stores with termination protection disabled

Determine the areas in which event data stores have termination protection disabled in your AWS CloudTrail. This is useful to identify potential vulnerabilities and ensure data safety.

select
  name,
  arn,
  status,
  created_timestamp,
  multi_region_enabled,
  organization_enabled,
  termination_protection_enabled
from
  aws_cloudtrail_event_data_store
where
  not termination_protection_enabled;
select
  name,
  arn,
  status,
  created_timestamp,
  multi_region_enabled,
  organization_enabled,
  termination_protection_enabled
from
  aws_cloudtrail_event_data_store
where
  termination_protection_enabled = 0;
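
Conversely, stores that are both multi-Region and organization-wide can be listed by combining the boolean columns above. A sketch in PostgreSQL (in SQLite, compare each column to 1):

```sql
select
  name,
  arn,
  status
from
  aws_cloudtrail_event_data_store
where
  multi_region_enabled
  and organization_enabled;
```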
title description
Steampipe Table: aws_cloudtrail_import - Query AWS CloudTrail using SQL
Allows users to query AWS CloudTrail imports to extract data about imported trail files such as the file name, import time, hash value, and more.

Table: aws_cloudtrail_import - Query AWS CloudTrail using SQL

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. It allows you to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.

Table Usage Guide

The aws_cloudtrail_import table in Steampipe provides you with information about imported trail files within AWS CloudTrail. This table allows you, as a DevOps engineer, to query import-specific details, including the file name, import time, hash value, and more. You can utilize this table to gather insights on imported trail files, such as their import status, hash type, and hash value. The schema outlines the various attributes of the imported trail file for you, including the import ID, import time, file name, and associated metadata.

Examples

Basic info

Explore which AWS CloudTrail imports have been created and their current status to understand where the data is being sent. This can help in assessing the flow of data and ensuring it is reaching the intended destinations.

select
  import_id,
  created_timestamp,
  import_status,
  destinations
from
  aws_cloudtrail_import;
select
  import_id,
  created_timestamp,
  import_status,
  destinations
from
  aws_cloudtrail_import;

List imports that are not completed

Identify instances where CloudTrail imports are still in progress. This is useful for tracking the progress of data import tasks and identifying any potential issues or delays.

select
  import_id,
  created_timestamp,
  import_source
from
  aws_cloudtrail_import
where
  import_status <> 'COMPLETED';
select
  import_id,
  created_timestamp,
  import_source
from
  aws_cloudtrail_import
where
  import_status != 'COMPLETED';

List imports created in the last 30 days

Identify recent imports within the last 30 days to track their status and duration. This is useful for understanding recent activity and ensuring timely data retrieval.

select
  import_id,
  created_timestamp,
  import_status,
  start_event_time,
  end_event_time
from
  aws_cloudtrail_import
where
  created_timestamp >= now() - interval '30' day;
select
  import_id,
  created_timestamp,
  import_status,
  start_event_time,
  end_event_time
from
  aws_cloudtrail_import
where
  created_timestamp >= datetime('now', '-30 day');

Get import source details of each import

Identify the origins of each import by examining the access role, region, and URI of the S3 bucket used. This can be useful for auditing purposes or to troubleshoot issues related to specific imports.

select
  import_id,
  import_status,
  import_source ->> 'S3BucketAccessRoleArn' as s3_bucket_access_role_arn,
  import_source ->> 'S3BucketRegion' as s3_bucket_region,
  import_source ->> 'S3LocationUri' as s3_location_uri
from
  aws_cloudtrail_import;
select
  import_id,
  import_status,
  json_extract(import_source, '$.S3BucketAccessRoleArn') as s3_bucket_access_role_arn,
  json_extract(import_source, '$.S3BucketRegion') as s3_bucket_region,
  json_extract(import_source, '$.S3LocationUri') as s3_location_uri
from
  aws_cloudtrail_import;

Get import statistic of each import

Gain insights into the performance of each import operation by assessing the number of completed events, failed entries, and completed files. This is useful for monitoring the efficiency and reliability of data import processes.

select
  import_id,
  import_status,
  import_statistics -> 'EventsCompleted' as events_completed,
  import_statistics -> 'FailedEntries' as failed_entries,
  import_statistics -> 'FilesCompleted' as files_completed,
  import_statistics -> 'PrefixesCompleted' as prefixes_completed,
  import_statistics -> 'PrefixesFound' as prefixes_found
from
  aws_cloudtrail_import;
select
  import_id,
  import_status,
  json_extract(import_statistics, '$.EventsCompleted') as events_completed,
  json_extract(import_statistics, '$.FailedEntries') as failed_entries,
  json_extract(import_statistics, '$.FilesCompleted') as files_completed,
  json_extract(import_statistics, '$.PrefixesCompleted') as prefixes_completed,
  json_extract(import_statistics, '$.PrefixesFound') as prefixes_found
from
  aws_cloudtrail_import;
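
Count imports by status

A simple aggregate over import_status gives an at-a-glance health summary of all imports; this sketch works unchanged in both PostgreSQL and SQLite.

```sql
select
  import_status,
  count(*) as import_count
from
  aws_cloudtrail_import
group by
  import_status;
```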
title description
Steampipe Table: aws_cloudtrail_lookup_event - Query AWS CloudTrail Lookup Events using SQL
Allows users to query AWS CloudTrail Lookup Events, providing information about each trail event within AWS CloudTrail. The table can be used to retrieve details such as the event time, event name, resources involved, and much more.

Table: aws_cloudtrail_lookup_event - Query AWS CloudTrail Lookup Events using SQL

AWS CloudTrail Lookup Events is a feature within AWS CloudTrail, a service that provides a record of actions taken by a user, role, or an AWS service in AWS. This feature specifically allows you to look up and retrieve information about the events recorded by CloudTrail.

Table Usage Guide

The aws_cloudtrail_lookup_event table in Steampipe provides you with information about each trail event within AWS CloudTrail. This table allows you, as a DevOps engineer, to query event-specific details, including event time, event name, resources involved, and more. You can utilize this table to gather insights on trail events, such as event source, user identity, and request parameters. The schema outlines the various attributes of the CloudTrail event for you, including the event ID, event version, read only, and associated tags.

Important Notes

  • For improved performance, it is advised that you use the optional qual start_time and end_time to limit the result set to a specific time period.
  • This table supports optional quals. Queries with optional quals are optimised to use server-side filtering via CloudTrail lookup attributes. Optional quals are supported for the following columns:
    • read_only
    • event_id
    • event_name
    • event_source
    • resource_name
    • resource_type
    • access_key_id
    • start_time
    • end_time
    • username

Examples

List events that occurred over the last five minutes

This query is useful for gaining insights into recent activity within your AWS environment. It provides a quick overview of the events that have taken place in the last five minutes, which can be particularly useful for immediate incident response or real-time monitoring.

select
  event_name,
  event_source,
  event_time,
  username,
  jsonb_pretty(cloud_trail_event) as cloud_trail_event
from
  aws_cloudtrail_lookup_event
where
  start_time = now() - interval '5 minutes'
  and end_time = now();
select
  event_name,
  event_source,
  event_time,
  username,
  json(cloud_trail_event) as cloud_trail_event
from
  aws_cloudtrail_lookup_event
where
  start_time = datetime('now', '-5 minutes')
  and end_time = datetime('now');

List all action (non-read-only) events that occurred over the last hour

Explore which action events have occurred in the last hour on AWS Cloudtrail. This is useful for identifying recent activities that have potentially altered your system.

select
  event_name,
  event_source,
  event_time,
  username,
  jsonb_pretty(cloud_trail_event) as cloud_trail_event
from
  aws_cloudtrail_lookup_event
where
  start_time = now() - interval '1 hour'
  and end_time = now()
  and read_only = 'false'
order by
  event_time asc;
select
  event_name,
  event_source,
  event_time,
  username,
  json(cloud_trail_event) as cloud_trail_event
from
  aws_cloudtrail_lookup_event
where
  start_time = datetime('now', '-1 hour')
  and end_time = datetime('now')
  and read_only = 'false'
order by
  event_time asc;

List events for a specific service (IAM) that occurred over the last hour

This query allows users to monitor recent activity for a specific service, in this case, AWS's Identity and Access Management (IAM). It is particularly useful for security audits, as it provides a chronological overview of events, including who initiated them and what actions were taken, over the last hour.

select
  event_name,
  event_source,
  event_time,
  jsonb_pretty(cloud_trail_event) as cloud_trail_event
from
  aws_cloudtrail_lookup_event
where
  event_source = 'iam.amazonaws.com'
  and event_time >= now() - interval '1 hour';
select
  event_name,
  event_source,
  event_time,
  json(cloud_trail_event) as cloud_trail_event
from
  aws_cloudtrail_lookup_event
where
  event_source = 'iam.amazonaws.com'
  and event_time >= datetime('now', '-1 hour');
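Beyond inspecting individual events, the same table can be aggregated. This is a sketch (PostgreSQL syntax, using only the columns shown above) that counts events per originating service over the last hour:

```sql
-- Count CloudTrail lookup events per originating service over the last hour
select
  event_source,
  count(*) as event_count
from
  aws_cloudtrail_lookup_event
where
  start_time = now() - interval '1 hour'
  and end_time = now()
group by
  event_source
order by
  event_count desc;
```

A high count for an unexpected service can be a useful starting point for a deeper per-event query.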
title description
Steampipe Table: aws_cloudtrail_query - Query AWS CloudTrail using SQL
Allows users to query AWS CloudTrail events for a detailed view of account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.

Table: aws_cloudtrail_query - Query AWS CloudTrail using SQL

The AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. It allows you to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. With CloudTrail, you can conduct security analysis, track changes to your AWS resources, and aid in compliance reporting.

Table Usage Guide

The aws_cloudtrail_query table in Steampipe provides you with information about queries run against CloudTrail Lake event data stores. This table allows you, as a DevOps engineer, to inspect query-specific details, including the query status, creation time, query string, and execution statistics such as the number of events matched and scanned. You can utilize this table to monitor query activity, troubleshoot failed queries, and track scan costs across your event data stores. The schema outlines the various attributes of the CloudTrail query for you, including the query ID, the ARN of the event data store, and execution timing.

Examples

Basic info

Gain insights into the status and efficiency of your AWS CloudTrail queries, including the number of events matched and scanned, to optimize resource usage and improve query performance. This can be particularly useful for troubleshooting and auditing purposes.

select
  query_id,
  event_data_store_arn,
  query_status,
  creation_time,
  events_matched,
  events_scanned
from
  aws_cloudtrail_query;
select
  query_id,
  event_data_store_arn,
  query_status,
  creation_time,
  events_matched,
  events_scanned
from
  aws_cloudtrail_query;
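To get an overview rather than a row per query, the basic columns above can be aggregated. A sketch in PostgreSQL syntax:

```sql
-- Summarize CloudTrail Lake queries by status (e.g., FINISHED, FAILED, CANCELLED)
select
  query_status,
  count(*) as query_count,
  sum(events_scanned) as total_events_scanned
from
  aws_cloudtrail_query
group by
  query_status
order by
  query_count desc;
```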

List failed queries

Identify failed AWS CloudTrail Lake queries to gain insight into potential issues or bottlenecks within your system.

select
  query_id,
  event_data_store_arn,
  query_status,
  creation_time,
  query_string,
  execution_time_in_millis
from
  aws_cloudtrail_query
where
  query_status = 'FAILED';
select
  query_id,
  event_data_store_arn,
  query_status,
  creation_time,
  query_string,
  execution_time_in_millis
from
  aws_cloudtrail_query
where
  query_status = 'FAILED';

Get event data store details for the queries

Explore the relationship between specific queries and their corresponding event data stores in AWS CloudTrail, providing insights into the status, multi-region capability, and termination protection of these data stores.

select
  q.query_id as query_id,
  q.event_data_store_arn as event_data_store_arn,
  s.name as event_data_store_name,
  s.status as event_data_store_status,
  s.multi_region_enabled as multi_region_enabled,
  s.termination_protection_enabled as termination_protection_enabled,
  s.updated_timestamp as event_data_store_updated_timestamp
from
  aws_cloudtrail_query as q,
  aws_cloudtrail_event_data_store as s
where
 s.arn = q.event_data_store_arn;
select
  q.query_id as query_id,
  q.event_data_store_arn as event_data_store_arn,
  s.name as event_data_store_name,
  s.status as event_data_store_status,
  s.multi_region_enabled as multi_region_enabled,
  s.termination_protection_enabled as termination_protection_enabled,
  s.updated_timestamp as event_data_store_updated_timestamp
from
  aws_cloudtrail_query as q,
  aws_cloudtrail_event_data_store as s
where
 s.arn = q.event_data_store_arn;

List queries created within the last 3 days

Identify AWS CloudTrail queries that have been created within the last three days, allowing you to monitor recent query activity and understand their execution times.

select
  query_id,
  event_data_store_arn,
  query_status,
  creation_time,
  query_string,
  execution_time_in_millis
from
  aws_cloudtrail_query
where
  creation_time >= now() - interval '3' day;
select
  query_id,
  event_data_store_arn,
  query_status,
  creation_time,
  query_string,
  execution_time_in_millis
from
  aws_cloudtrail_query
where
  creation_time >= datetime('now', '-3 day');
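Because execution_time_in_millis and events_scanned are available, queries can also be ranked by cost. A sketch (PostgreSQL syntax):

```sql
-- Find the ten most expensive CloudTrail Lake queries by execution time
select
  query_id,
  events_scanned,
  execution_time_in_millis
from
  aws_cloudtrail_query
order by
  execution_time_in_millis desc
limit 10;
```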
title description
Steampipe Table: aws_cloudtrail_trail - Query AWS CloudTrail Trail using SQL
Allows users to query AWS CloudTrail Trails for information about the AWS CloudTrail service's trail records. This includes trail configuration details, status, and associated metadata.

Table: aws_cloudtrail_trail - Query AWS CloudTrail Trail using SQL

An AWS CloudTrail trail is a configuration that enables delivery of CloudTrail events to an Amazon S3 bucket, with optional delivery to CloudWatch Logs. With trails, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.

Table Usage Guide

The aws_cloudtrail_trail table in Steampipe provides you with information about each trail within the AWS CloudTrail service. This table allows you, as a DevOps engineer, to query trail-specific details, including configuration settings, trail status, and associated metadata. You can utilize this table to gather insights on trails, such as CloudTrail configuration, trail status, and more. The schema outlines the various attributes of the trail for you, including the trail ARN, home region, log file validation, and associated tags.

Examples

Basic info

Explore which trails in your AWS CloudTrail service are multi-region. This can help you understand your trail configuration and manage resources effectively across different regions.

select
  name,
  home_region,
  is_multi_region_trail
from
  aws_cloudtrail_trail;
select
  name,
  home_region,
  is_multi_region_trail
from
  aws_cloudtrail_trail;

List trails that are not encrypted

Identify instances where trails in AWS CloudTrail are not encrypted. This can help in assessing the security posture of your AWS environment, and ensure that all trails are adequately protected.

select
  name,
  kms_key_id
from
  aws_cloudtrail_trail
where
  kms_key_id is null;
select
  name,
  kms_key_id
from
  aws_cloudtrail_trail
where
  kms_key_id is null;

List trails that store logs in publicly accessible S3 buckets

Discover the trails that are storing logs in publicly accessible S3 buckets. This is useful for identifying potential security risks associated with public access to sensitive data.

select
  trail.name as trail_name,
  bucket.name as bucket_name,
  bucket.bucket_policy_is_public as is_publicly_accessible
from
  aws_cloudtrail_trail as trail
  join aws_s3_bucket as bucket on trail.s3_bucket_name = bucket.name
where
  bucket.bucket_policy_is_public;
select
  trail.name as trail_name,
  bucket.name as bucket_name,
  bucket.bucket_policy_is_public as is_publicly_accessible
from
  aws_cloudtrail_trail as trail
  join aws_s3_bucket as bucket on trail.s3_bucket_name = bucket.name
where
  bucket.bucket_policy_is_public = 1;

List trails that store logs in an S3 bucket with versioning disabled

Identify trails that store logs in an S3 bucket with versioning disabled, allowing you to spot potential data-integrity risks.

select
  trail.name as trail_name,
  bucket.name as bucket_name,
  logging
from
  aws_cloudtrail_trail as trail
  join aws_s3_bucket as bucket on trail.s3_bucket_name = bucket.name
where
  not versioning_enabled;
select
  trail.name as trail_name,
  bucket.name as bucket_name,
  logging
from
  aws_cloudtrail_trail as trail
  join aws_s3_bucket as bucket on trail.s3_bucket_name = bucket.name
where
  versioning_enabled = 0;

List trails that are not currently logging

Identify instances where trails in AWS CloudTrail are not actively logging events. This is useful in pinpointing potential security risks or gaps in logging policies.

select
  name,
  is_logging
from
  aws_cloudtrail_trail
where
  not is_logging;
select
  name,
  is_logging
from
  aws_cloudtrail_trail
where
  is_logging = 0;

List trails with log file validation disabled

Identify trails with log file validation disabled. This could be useful in spotting potential security risks or compliance issues.

select
  name,
  arn,
  log_file_validation_enabled
from
  aws_cloudtrail_trail
where
  not log_file_validation_enabled;
select
  name,
  arn,
  log_file_validation_enabled
from
  aws_cloudtrail_trail
where
  log_file_validation_enabled = 0;
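The encryption, logging, and validation checks above can be combined into a single posture query. A sketch in PostgreSQL syntax, using only columns already shown:

```sql
-- Flag trails that fail any of the basic hardening checks
select
  name,
  is_logging,
  log_file_validation_enabled,
  kms_key_id is not null as encrypted
from
  aws_cloudtrail_trail
where
  not is_logging
  or not log_file_validation_enabled
  or kms_key_id is null;
```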

List shadow trails

List the shadow copies of multi-region trails, i.e., rows where a multi-region trail appears in a region other than its home region. This is useful for understanding trail replication and for avoiding double-counting trails when auditing coverage.

select
  name,
  arn,
  region,
  home_region
from
  aws_cloudtrail_trail
where
  is_multi_region_trail
  and home_region <> region;
select
  name,
  arn,
  region,
  home_region
from
  aws_cloudtrail_trail
where
  is_multi_region_trail = 1
  and home_region != region;
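Conversely, when counting or auditing trails you usually want each trail exactly once; filtering to the home region excludes the shadow copies (a sketch in PostgreSQL syntax):

```sql
-- List each trail exactly once, ignoring shadow copies in other regions
select
  name,
  arn,
  home_region
from
  aws_cloudtrail_trail
where
  region = home_region;
```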
title description
Steampipe Table: aws_cloudtrail_trail_event - Query AWS CloudTrail Events using SQL
Allows users to query AWS CloudTrail Events, providing information about each trail event within AWS CloudTrail. The table can be used to retrieve details such as the event time, event name, resources involved, and much more.

Table: aws_cloudtrail_trail_event - Query AWS CloudTrail Events using SQL

AWS CloudTrail Events are records of activity within your AWS environment. CloudTrail captures all API calls for your account, including calls made via the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services, and retains a history of that activity for auditing.

Table Usage Guide

The aws_cloudtrail_trail_event table in Steampipe provides you with information about each trail event within AWS CloudTrail. This table allows you, as a DevOps engineer, to query event-specific details, including event time, event name, resources involved, and more. You can utilize this table to gather insights on trail events, such as event source, user identity, and request parameters. The schema outlines the various attributes of the CloudTrail event for you, including the event ID, event version, read only, and associated tags.

Important Notes

  • You must specify log_group_name in a where clause in order to use this table.
  • For improved performance, it is advised that you use the optional qual timestamp to limit the result set to a specific time period.
  • This table supports optional quals. Queries with optional quals are optimised to use CloudWatch filters. Optional quals are supported for the following columns:
    • access_key_id
    • aws_region (region of the event, useful in case of multi-region trails)
    • error_code
    • event_category
    • event_id
    • event_name
    • event_source
    • filter
    • log_stream_name
    • region
    • source_ip_address
    • timestamp
    • username
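The optional quals listed above can be combined so that filtering happens in CloudWatch rather than client-side. A sketch (PostgreSQL syntax; the log group name is a placeholder, as in the examples below):

```sql
-- Push event_source, aws_region, and time filters down to CloudWatch
select
  event_name,
  event_time,
  username
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and event_source = 's3.amazonaws.com'
  and aws_region = 'us-east-1'
  and timestamp >= now() - interval '15 minutes';
```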

Examples

List events that occurred over the last five minutes

This query is useful for gaining insights into recent activity within your AWS environment. It provides a quick overview of the events that have taken place in the last five minutes, which can be particularly useful for immediate incident response or real-time monitoring.

select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  jsonb_pretty(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and timestamp >= now() - interval '5 minutes';
select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  json(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and timestamp >= datetime('now', '-5 minutes');

List ordered events that occurred between five to ten minutes ago

Explore the sequence of events that occurred within a specific time frame in the recent past. This can be useful for auditing activities, identifying anomalies, or tracking user behaviour within a given period.

select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  jsonb_pretty(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and timestamp between (now() - interval '10 minutes') and (now() - interval '5 minutes')
order by
  event_time asc;
select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  json(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and timestamp between (datetime('now', '-10 minutes')) and (datetime('now', '-5 minutes'))
order by
  event_time asc;

List all action (non-read-only) events that occurred over the last hour

Explore which action events have occurred in the last hour on AWS Cloudtrail. This is useful for identifying recent activities that have potentially altered your system.

select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  jsonb_pretty(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and not read_only
  and timestamp >= now() - interval '1 hour'
order by
  event_time asc;
select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  json(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and not read_only
  and timestamp >= datetime('now', '-1 hours')
order by
  event_time asc;

List events for a specific service (IAM) that occurred over the last hour

This query allows users to monitor recent activity for a specific service, in this case, AWS's Identity and Access Management (IAM). It is particularly useful for security audits, as it provides a chronological overview of events, including who initiated them and what actions were taken, over the last hour.

select
  event_name,
  event_source,
  event_time,
  user_type,
  user_identifier,
  jsonb_pretty(request_parameters) as request_parameters,
  jsonb_pretty(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and event_source = 'iam.amazonaws.com'
  and timestamp >= now() - interval '1 hour'
order by
  event_time asc;
select
  event_name,
  event_source,
  event_time,
  user_type,
  user_identifier,
  json(request_parameters) as request_parameters,
  json(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and event_source = 'iam.amazonaws.com'
  and timestamp >= datetime('now', '-1 hour')
order by
  event_time asc;

List events for an IAM user (steampipe) that occurred over the last hour

Explore which events have occurred on your system over the past hour that are associated with a specific IAM user. This can help in monitoring user activity and identifying potential security concerns.

select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  jsonb_pretty(request_parameters) as request_parameters,
  jsonb_pretty(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and username = 'steampipe'
  and timestamp >= now() - interval '1 hour'
order by
  event_time asc;
select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  request_parameters,
  response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and username = 'steampipe'
  and timestamp >= datetime('now', '-1 hour')
order by
  event_time asc;

List events performed by IAM users that occurred over the last hour

Determine the activities undertaken by IAM users within the past hour in your AWS environment. This can help in understanding user behaviors, monitoring security, and auditing purposes.

select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  jsonb_pretty(request_parameters) as request_parameters,
  jsonb_pretty(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and user_type = 'IAMUser'
  and timestamp >= now() - interval '1 hour'
order by
  event_time asc;
select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  request_parameters,
  response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and user_type = 'IAMUser'
  and timestamp >= datetime('now','-1 hours')
order by
  event_time asc;

List events performed with an assumed role that occurred over the last hour

Explore which actions were carried out using an assumed role in the past hour. This is useful in monitoring and auditing for any unusual or unauthorized activities.

select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  jsonb_pretty(request_parameters) as request_parameters,
  jsonb_pretty(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and user_type = 'AssumedRole'
  and timestamp >= now() - interval '1 hour'
order by
  event_time asc;
select
  event_name,
  event_source,
  event_time,
  user_type,
  username,
  user_identifier,
  request_parameters,
  response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and user_type = 'AssumedRole'
  and timestamp >= datetime('now', '-1 hours')
order by
  event_time asc;

List events that were not successfully executed that occurred over the last hour

Identify instances where events were not executed successfully in the past hour. This is useful for monitoring system performance and quickly addressing any operational issues.

select
  event_name,
  event_source,
  event_time,
  error_code,
  error_message,
  user_type,
  username,
  user_identifier,
  jsonb_pretty(request_parameters) as request_parameters,
  jsonb_pretty(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and error_code is not null
  and timestamp >= now() - interval '1 hour'
order by
  event_time asc;
select
  event_name,
  event_source,
  event_time,
  error_code,
  error_message,
  user_type,
  username,
  user_identifier,
  request_parameters,
  response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and error_code is not null
  and timestamp >= datetime('now','-1 hours')
order by
  event_time asc;

Filter examples

For more information on CloudWatch log filters, please refer to Filter Pattern Syntax.

List events originating from a specific IP address range that occurred over the last hour

Explore which events have originated from a specific IP address range in the last hour. This is useful for understanding and monitoring recent activity and potential security incidents related to that IP range.

select
  event_name,
  event_source,
  event_time,
  error_code,
  error_message,
  user_type,
  username,
  user_identifier,
  jsonb_pretty(request_parameters) as request_parameters,
  jsonb_pretty(response_elements) as response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and filter = '{ $.sourceIPAddress = 203.189.* }'
  and timestamp >= now() - interval '1 hour'
order by
  event_time asc;
select
  event_name,
  event_source,
  event_time,
  error_code,
  error_message,
  user_type,
  username,
  user_identifier,
  request_parameters,
  response_elements
from
  aws_cloudtrail_trail_event
where
  log_group_name = 'aws-cloudtrail-log-group-name'
  and filter = '{ $.sourceIPAddress = 203.189.* }'
  and timestamp >= datetime('now', '-1 hour')
order by
  event_time asc;
title description
Steampipe Table: aws_cloudwatch_alarm - Query AWS CloudWatch Alarms using SQL
Allows users to query AWS CloudWatch Alarms, providing detailed information about each alarm, including its configuration, state, and associated actions.

Table: aws_cloudwatch_alarm - Query AWS CloudWatch Alarms using SQL

AWS CloudWatch Alarms are a feature of Amazon CloudWatch, a monitoring service for AWS resources and applications. CloudWatch Alarms allow you to monitor Amazon Web Services resources and trigger actions when data points cross defined thresholds. They help you react quickly to issues that may affect your applications or infrastructure, enhancing your ability to keep applications running smoothly.

Table Usage Guide

The aws_cloudwatch_alarm table in Steampipe provides you with information about alarms within AWS CloudWatch. This table allows you, as a DevOps engineer, to query alarm-specific details, including its current state, configuration, and actions associated with each alarm. You can utilize this table to gather insights on alarms, such as alarms in a particular state, alarms associated with specific AWS resources, and understanding the actions that will be triggered when an alarm state changes. The schema outlines the various attributes of the CloudWatch alarm for you, including the alarm name, alarm description, metric name, comparison operator, and associated tags.

Examples

Basic info

Explore the status and configurations of your CloudWatch alarms to understand their current operational state and the conditions that trigger them. This can help you monitor the health and performance of your AWS resources more effectively.

select
  name,
  state_value,
  metric_name,
  actions_enabled,
  comparison_operator,
  namespace,
  statistic
from
  aws_cloudwatch_alarm;
select
  name,
  state_value,
  metric_name,
  actions_enabled,
  comparison_operator,
  namespace,
  statistic
from
  aws_cloudwatch_alarm;

List alarms in alarm state

Discover the alarms that are currently in the ALARM state. This is useful to quickly identify and address any issues within your cloud infrastructure.

select
  name,
  arn,
  state_value,
  state_reason
from
  aws_cloudwatch_alarm
where
 state_value = 'ALARM';
select
  name,
  arn,
  state_value,
  state_reason
from
  aws_cloudwatch_alarm
where
 state_value = 'ALARM';
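For a fleet-wide view rather than a single state, group on state_value. A sketch (PostgreSQL syntax):

```sql
-- Count alarms in each state (OK, ALARM, INSUFFICIENT_DATA)
select
  state_value,
  count(*) as alarm_count
from
  aws_cloudwatch_alarm
group by
  state_value;
```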

List alarms with alarm actions enabled

Identify instances where alarms have been activated with specific actions in the AWS CloudWatch service. This can be useful in understanding the active monitoring and alerting mechanisms in place for system events.

select
  arn,
  actions_enabled,
  alarm_actions
from
  aws_cloudwatch_alarm
where
  actions_enabled;
select
  arn,
  actions_enabled,
  alarm_actions
from
  aws_cloudwatch_alarm
where
  actions_enabled = 1;

Get the metric attached to each alarm based on a single metric

Identify the alarms that are defined on a single metric within the AWS CloudWatch service. This is particularly useful for monitoring and managing application performance, resource utilization, and operational health.

select
  name,
  metric_name,
  namespace,
  period,
  statistic,
  dimensions
from
  aws_cloudwatch_alarm
where
  metric_name is not null;
select
  name,
  metric_name,
  namespace,
  period,
  statistic,
  dimensions
from
  aws_cloudwatch_alarm
where
  metric_name is not null;

Get metrics attached to each alarm based on a metric math expression

Identify the metrics associated with each alarm based on mathematical expressions. This can help in understanding the performance of various elements and aid in proactive monitoring and troubleshooting.

select
  name,
  metric ->> 'Id' as metric_id,
  metric ->> 'Expression' as metric_expression,
  metric -> 'MetricStat' -> 'Metric' ->> 'MetricName' as metric_name,
  metric -> 'MetricStat' -> 'Metric' ->> 'Namespace' as metric_namespace,
  metric -> 'MetricStat' -> 'Metric' ->> 'Dimensions' as metric_dimensions,
  metric ->> 'ReturnData' as metric_return_data
from
  aws_cloudwatch_alarm,
  jsonb_array_elements(metrics) as metric;
select
  name,
  json_extract(metric, '$.Id') as metric_id,
  json_extract(metric, '$.Expression') as metric_expression,
  json_extract(metric, '$.MetricStat.Metric.MetricName') as metric_name,
  json_extract(metric, '$.MetricStat.Metric.Namespace') as metric_namespace,
  json_extract(metric, '$.MetricStat.Metric.Dimensions') as metric_dimensions,
  json_extract(metric, '$.ReturnData') as metric_return_data
from
  aws_cloudwatch_alarm,
  json_each(metrics) as metric;
title description
Steampipe Table: aws_cloudwatch_log_event - Query AWS CloudWatch Log Events using SQL
Allows users to query AWS CloudWatch Log Events to retrieve information about log events from a specified log group. Users can utilize this table to monitor and troubleshoot systems and applications using their existing log data.

Table: aws_cloudwatch_log_event - Query AWS CloudWatch Log Events using SQL

AWS CloudWatch Log Events are part of Amazon CloudWatch Logs, which enables you to monitor, store, and access your log files from Amazon Elastic Compute Cloud (EC2) instances, AWS CloudTrail, and other sources. It allows you to centralize the logs from all your systems, applications, and AWS services in a single, highly scalable service. With CloudWatch Log Events, you can quickly search and filter your log data for specific error codes or patterns, and set alarms for specific phrases, values, or patterns that appear in your log data.

Table Usage Guide

The aws_cloudwatch_log_event table in Steampipe provides you with information about Log Events within AWS CloudWatch. This table allows you, as a DevOps engineer, system administrator, or developer, to query event-specific details, including the event message, event timestamp, and associated metadata. You can utilize this table to gather insights on log events, such as event patterns, event frequency, event sources, and more. The schema outlines the various attributes of the Log Event for you, including the event ID, log group name, log stream name, and ingestion time.

Important Notes

  • You must specify log_group_name in a where clause in order to use this table.
  • For improved performance, it is advised that you use the optional qual timestamp to limit the result set to a specific time period.
  • This table supports optional quals. Queries with optional quals are optimised to use CloudWatch filters. Optional quals are supported for the following columns:
    • filter
    • log_stream_name
    • region
    • timestamp

The aws_cloudtrail_trail_event table, documented in this guide, also retrieves data from CloudWatch log groups but has columns specific to CloudTrail events for easier querying.

Examples

List events that occurred over the last five minutes

Explore recent activity within your system by identifying events that have occurred in the past five minutes. This is particularly useful for real-time monitoring and immediate issue detection.

select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and timestamp >= now() - interval '5 minutes';
select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and timestamp >= datetime('now', '-5 minutes');

List ordered events that occurred between five to ten minutes ago

Determine the sequence of events that transpired within a specific timeframe in your AWS CloudWatch logs. This is useful for tracking activity and identifying potential issues that occurred between five to ten minutes ago.

select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and timestamp between (now() - interval '10 minutes') and (now() - interval '5 minutes')
order by
  timestamp asc;
select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and timestamp between datetime('now', '-10 minutes') and datetime('now', '-5 minutes')
order by
  timestamp asc;

Filter examples

For more information on CloudWatch log filters, please refer to Filter Pattern Syntax.

List events that match the filter pattern term eventName to a single value that occurred over the last hour

Determine the occurrences of a specific event within the last hour in your AWS CloudWatch logs. This is particularly useful for tracking and analyzing specific activities or changes over a short period of time.

select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and filter = '{$.eventName="DescribeVpcs"}'
  and timestamp >= now() - interval '1 hour';
select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and filter = '{$.eventName="DescribeVpcs"}'
  and timestamp >= datetime('now', '-1 hour');

List events that match the filter pattern term errorCode to a single value that occurred over the last hour

The query is designed to monitor and identify instances of unauthorized access or access denial within the last hour. This is particularly useful for maintaining security and troubleshooting access issues in real-time.

select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and filter = '{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }'
  and timestamp >= now() - interval '1 hour';
select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and filter = '{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }'
  and timestamp >= datetime('now', '-1 hours');

List events that match the filter pattern term eventName to multiple values that occurred over the last hour

Explore the specific security-related events in your AWS CloudWatch logs from the past hour to gain insights into potential security changes or threats. This helps in maintaining a secure and compliant environment by tracking changes in security groups and identifying suspicious activities.

select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and filter = '{($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup)}'
  and region = 'us-east-1'
  and timestamp >= now() - interval '1 hour';
select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and json_extract(filter, '$.eventName') in ('AuthorizeSecurityGroupIngress', 'AuthorizeSecurityGroupEgress', 'RevokeSecurityGroupIngress', 'RevokeSecurityGroupEgress', 'CreateSecurityGroup', 'DeleteSecurityGroup')
  and region = 'us-east-1'
  and timestamp >= datetime('now', '-1 hour');

List events which match a specific field in a JSON object that occurred over the past day

This query is useful for monitoring user activity within a specific time frame. Specifically, it helps identify actions taken by a 'superuser' within the last day, providing insights into their behavior and potential security implications.

select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and filter = '{$.userIdentity.sessionContext.sessionIssuer.userName="turbot_superuser"}'
  and timestamp >= now() - interval '1 day';
select
  log_group_name,
  log_stream_name,
  event_id,
  timestamp,
  ingestion_time,
  message
from
  aws_cloudwatch_log_event
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and json_extract(filter, '$.userIdentity.sessionContext.sessionIssuer.userName') = 'turbot_superuser'
  and timestamp >= datetime('now', '-1 day');
title description
Steampipe Table: aws_cloudwatch_log_group - Query AWS CloudWatch Log Groups using SQL
Allows users to query AWS CloudWatch Log Groups and retrieve their attributes such as ARN, creation time, stored bytes, metric filter count, and more.

Table: aws_cloudwatch_log_group - Query AWS CloudWatch Log Groups using SQL

The AWS CloudWatch Log Group is a resource that encapsulates your AWS CloudWatch Logs. These log groups are used to monitor, store, and access your log events. It allows you to specify a retention period to automatically expire old log events, thus aiding in managing your log data efficiently.

Table Usage Guide

The aws_cloudwatch_log_group table in Steampipe provides you with information about Log Groups within AWS CloudWatch. This table allows you, as a DevOps engineer, to query Log Group-specific details, including the ARN, creation time, stored bytes, metric filter count, retention period, and associated tags. You can utilize this table to gather insights on Log Groups, such as their size, age, and associated metrics. The schema outlines the various attributes of the Log Group for you, including the ARN, creation time, stored bytes, and associated tags.

Examples

List all the log groups that are not encrypted

Identify instances where log groups in AWS CloudWatch are not encrypted. This is beneficial in assessing security measures and ensuring encryption is applied where necessary for data protection.

select
  name,
  kms_key_id,
  metric_filter_count,
  retention_in_days
from
  aws_cloudwatch_log_group
where
  kms_key_id is null;
select
  name,
  kms_key_id,
  metric_filter_count,
  retention_in_days
from
  aws_cloudwatch_log_group
where
  kms_key_id is null;

List of log groups whose retention period is less than 7 days

Determine the areas in your AWS Cloudwatch where log groups are set to retain data for less than a week. This query is useful for identifying potential data loss risks due to short retention periods.

select
  name,
  retention_in_days
from
  aws_cloudwatch_log_group
where
  retention_in_days < 7;
select
  name,
  retention_in_days
from
  aws_cloudwatch_log_group
where
  retention_in_days < 7;

Metric filters attached to log groups

Uncover the details of how your AWS CloudWatch log groups relate to metric filters, providing a comprehensive view of your logging and monitoring setup. This can be helpful in auditing your CloudWatch configurations, ensuring that important log data is being correctly processed and monitored.

select
  groups.name as log_group_name,
  metric.name as metric_filter_name,
  metric.filter_pattern,
  metric.metric_transformation_name,
  metric.metric_transformation_value
from
  aws_cloudwatch_log_group groups
  join aws_cloudwatch_log_metric_filter metric on groups.name = metric.log_group_name;
select
  groups.name as log_group_name,
  metric.name as metric_filter_name,
  metric.filter_pattern,
  metric.metric_transformation_name,
  metric.metric_transformation_value
from
  aws_cloudwatch_log_group as groups
  join aws_cloudwatch_log_metric_filter as metric on groups.name = metric.log_group_name;

List data protection audit policies and their destinations for each log group

Explore the configuration of your data protection audit policies to understand how and where your log data is being sent. This can be useful for ensuring that your logs are being directed to the correct destinations, making it easier to manage and monitor your data.

select
  i as data_identifier,
  s -> 'Operation' -> 'Audit' -> 'FindingsDestination' -> 'S3' -> 'Bucket' as destination_bucket,
  s -> 'Operation' -> 'Audit' -> 'FindingsDestination' -> 'CloudWatchLogs' -> 'LogGroup' as destination_log_group,
  s -> 'Operation' -> 'Audit' -> 'FindingsDestination' -> 'Firehose' -> 'DeliveryStream' as destination_delivery_stream
from
  aws_cloudwatch_log_group,
  jsonb_array_elements(data_protection_policy -> 'Statement') as s,
  jsonb_array_elements_text(s -> 'DataIdentifier') as i
where
  s ->> 'Sid' = 'audit-policy'
  and name = 'log-group-name';
Error: The corresponding SQLite query is unavailable.
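SQLite's json_each table-valued function can unnest the same arrays, so a rough equivalent can be sketched as below. This is an untested adaptation, assuming the SQLite plugin exposes data_protection_policy as JSON text:

```sql
select
  i.value as data_identifier,
  json_extract(s.value, '$.Operation.Audit.FindingsDestination.S3.Bucket') as destination_bucket,
  json_extract(s.value, '$.Operation.Audit.FindingsDestination.CloudWatchLogs.LogGroup') as destination_log_group,
  json_extract(s.value, '$.Operation.Audit.FindingsDestination.Firehose.DeliveryStream') as destination_delivery_stream
from
  aws_cloudwatch_log_group,
  json_each(json_extract(data_protection_policy, '$.Statement')) as s,
  json_each(json_extract(s.value, '$.DataIdentifier')) as i
where
  json_extract(s.value, '$.Sid') = 'audit-policy'
  and name = 'log-group-name';
```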

List log groups with no data protection policy

Determine the areas in which data protection policies are not applied to AWS Cloudwatch log groups. This can be useful for identifying potential security vulnerabilities and ensuring all log data is adequately protected.

select
  arn,
  name,
  creation_time
from
  aws_cloudwatch_log_group
where
  data_protection_policy is null;
select
  arn,
  name,
  creation_time
from
  aws_cloudwatch_log_group
where
  data_protection_policy is null;
title description
Steampipe Table: aws_cloudwatch_log_metric_filter - Query AWS CloudWatch log metric filters using SQL
Allows users to query AWS CloudWatch log metric filters to obtain detailed information about each filter, including its name, creation date, associated log group, filter pattern, metric transformations and more.

Table: aws_cloudwatch_log_metric_filter - Query AWS CloudWatch log metric filters using SQL

The AWS CloudWatch Log Metric Filter is a feature within AWS CloudWatch that enables you to extract information from the logs and create custom metrics. These custom metrics can be used for detailed monitoring and alarming based on patterns that might appear in your logs. This is a powerful tool for identifying trends, troubleshooting issues, and setting up real-time monitoring across your AWS resources.

Table Usage Guide

The aws_cloudwatch_log_metric_filter table in Steampipe provides you with information about log metric filters within AWS CloudWatch. This table allows you, as a DevOps engineer, to query filter-specific details, including the associated log group, filter pattern, and metric transformations. You can utilize this table to gather insights on filters, such as filter patterns used, metrics generated from log data, and more. The schema outlines for you the various attributes of the log metric filter, including the filter name, creation date, filter pattern, and associated log group.

Examples

Basic AWS cloudwatch log metric info

Explore the essential characteristics and setup of your AWS CloudWatch log metrics. This query can help you assess the overall configuration and performance metrics of your logs, providing valuable insights for monitoring and optimizing your AWS environment.

select
  name,
  log_group_name,
  creation_time,
  filter_pattern,
  metric_transformation_name,
  metric_transformation_namespace,
  metric_transformation_value
from
  aws_cloudwatch_log_metric_filter;
select
  name,
  log_group_name,
  creation_time,
  filter_pattern,
  metric_transformation_name,
  metric_transformation_namespace,
  metric_transformation_value
from
  aws_cloudwatch_log_metric_filter;

List the CloudWatch metric filters that send error logs to CloudWatch log groups

Identify instances where specific metric filters are configured to send error logs to Cloudwatch log groups. This allows for effective error tracking and proactive issue resolution in cloud environments.

select
  name,
  log_group_name,
  filter_pattern
from
  aws_cloudwatch_log_metric_filter
where
  filter_pattern ilike '%error%';
select
  name,
  log_group_name,
  filter_pattern
from
  aws_cloudwatch_log_metric_filter
where
  filter_pattern like '%error%';

Number of metric filters attached to each cloudwatch log group

Determine the areas in which Cloudwatch log groups have multiple metric filters attached. This can help in managing and optimizing your AWS Cloudwatch setup by understanding the distribution of metric filters across different log groups.

select
  log_group_name,
  count(name) as metric_filter_count
from
  aws_cloudwatch_log_metric_filter
group by
  log_group_name;
select
  log_group_name,
  count(name) as metric_filter_count
from
  aws_cloudwatch_log_metric_filter
group by
  log_group_name;
title description
Steampipe Table: aws_cloudwatch_log_resource_policy - Query AWS CloudWatch Log Resource Policies using SQL
Allows users to query AWS CloudWatch Log Resource Policies, providing details such as the policy name, policy document, and last updated timestamp.

Table: aws_cloudwatch_log_resource_policy - Query AWS CloudWatch Log Resource Policies using SQL

The AWS CloudWatch Log Resource Policy is a feature of Amazon CloudWatch that allows you to manage resource policies. These policies enable AWS services to perform tasks on your behalf without sharing your security credentials. They are crucial in controlling who can access your logs and what actions they can perform.

Table Usage Guide

The aws_cloudwatch_log_resource_policy table in Steampipe provides you with information about log resource policies within Amazon CloudWatch Logs. This table allows you, as a DevOps engineer, to query policy-specific details, including the policy name, policy document, and last updated timestamp. You can utilize this table to gather insights on policies, such as what actions are allowed or denied, the resources to which the policy applies, and the conditions under which the policy takes effect. The schema outlines for you the various attributes of the CloudWatch Logs resource policy, including the policy name, policy document, and last updated timestamp.

Examples

Basic Info

Explore the updates made to your AWS CloudWatch log resource policies. This query can be used to track policy changes over time, ensuring your settings align with your security and operational requirements.

select
  policy_name,
  last_updated_time,
  jsonb_pretty(policy) as policy,
  jsonb_pretty(policy_std) as policy_std
from
  aws_cloudwatch_log_resource_policy;
select
  policy_name,
  last_updated_time,
  policy,
  policy_std
from
  aws_cloudwatch_log_resource_policy;
title description
Steampipe Table: aws_cloudwatch_log_stream - Query AWS CloudWatch Log Stream using SQL
Allows users to query AWS CloudWatch Log Stream to retrieve detailed information about each log stream within a log group.

Table: aws_cloudwatch_log_stream - Query AWS CloudWatch Log Stream using SQL

The AWS CloudWatch Log Stream is a feature of AWS CloudWatch service that allows you to monitor, store, and access your log files from Amazon EC2 instances, AWS CloudTrail, and other sources. It provides real-time view of your logs and can store the data for as long as you need. It is useful for troubleshooting operational issues and identifying security incidents.

Table Usage Guide

The aws_cloudwatch_log_stream table in Steampipe provides you with information about each log stream within a log group in AWS CloudWatch. This table empowers you, as a DevOps engineer, to query log stream-specific details, including the creation time, the time of the last log event, and the stored bytes. You can utilize this table to gather insights on log streams, such as identifying log streams with the most recent activity, tracking the growth of log data, and more. The schema outlines the various attributes of the log stream, including the log group name, log stream name, creation time, and stored bytes for you.

Important Notes

  • To enhance performance, it is recommended to use the optional qualifiers name, log_stream_name_prefix, descending, and order_by to limit the result set.
  • It's important to note that the columns name and log_stream_name_prefix cannot be specified together. If both are included as query parameters in the where clause, the name parameter value will be overridden by the log_stream_name_prefix parameter value in the input.
  • The value of the order_by column can be either LogStreamName or LastEventTime. If the value is LogStreamName, the results are ordered by log stream name. If the value is LastEventTime, the results are ordered by the event time. The default value is LogStreamName. If you order the results by event time, you cannot specify the logStreamNamePrefix parameter. LastEventTimestamp represents the time of the most recent log event in the log stream in CloudWatch Logs. This number is expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. lastEventTimestamp updates on an eventual consistency basis. It typically updates in less than an hour from ingestion, but in rare situations might take longer.
  • If the descending key column value is true, results are returned in descending order. If the value is false, results are returned in ascending order. The default value is false.
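As a hedged illustration of these qualifiers, the query below lists the streams of one log group ordered by most recent event first; the log group name is a placeholder, and the last_event_timestamp column name is assumed from the note above:

```sql
select
  name,
  log_group_name,
  last_event_timestamp
from
  aws_cloudwatch_log_stream
where
  log_group_name = 'cloudwatch-log-event-group-name'
  and order_by = 'LastEventTime'
  and descending = true;
```

Note that log_stream_name_prefix is deliberately omitted here, since it cannot be combined with ordering by event time.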

Examples

Basic info

Explore which AWS CloudWatch log streams are active across different regions to manage and monitor your AWS resources effectively. This can help identify any regional patterns or irregularities in your log stream distribution.

select
  name,
  log_group_name,
  region
from
  aws_cloudwatch_log_stream;
select
  name,
  log_group_name,
  region
from
  aws_cloudwatch_log_stream;

Count of log streams per log group

Assess the elements within your AWS Cloudwatch to understand the distribution of log streams across different log groups. This can be useful in identifying groups with excessive streams, potentially indicating areas that require attention or optimization.

select
  log_group_name,
  count(*) as log_stream_count
from
  aws_cloudwatch_log_stream
group by
  log_group_name;
select
  log_group_name,
  count(*) as log_stream_count
from
  aws_cloudwatch_log_stream
group by
  log_group_name;
title description
Steampipe Table: aws_cloudwatch_log_subscription_filter - Query AWS CloudWatch Log Subscription Filters using SQL
Allows users to query AWS CloudWatch Log Subscription Filters, providing information about each subscription filter associated with the specified log group.

Table: aws_cloudwatch_log_subscription_filter - Query AWS CloudWatch Log Subscription Filters using SQL

The AWS CloudWatch Log Subscription Filter is a feature of Amazon CloudWatch Logs that enables you to route data from any log group to an AWS resource for real-time processing of log data. This feature can be used to stream data to AWS Lambda for custom processing or to Amazon Kinesis for storage, analytics, and machine learning. The subscription filter defines the pattern to match in the log events and the destination AWS resource where the matching events should be delivered.

Table Usage Guide

The aws_cloudwatch_log_subscription_filter table in Steampipe provides you with information about AWS CloudWatch Log Subscription Filters. This table enables you, as a DevOps engineer, data analyst, or other technical professional, to query subscription filter-specific details, including the associated log group, filter pattern, and destination ARN. You can utilize this table to gather insights on filters, such as the type of log events each filter is designed to match, the destination to which matched events are delivered, and more. The schema outlines the various attributes of the log subscription filter for you, including the filter name, filter pattern, role ARN, and associated tags.

Examples

Basic info

Gain insights into the creation and configuration of your AWS CloudWatch log subscription filters. This can be used to monitor and analyze the logs for patterns, ensuring efficient resource utilization and system health.

select
  name,
  log_group_name,
  creation_time,
  filter_pattern,
  destination_arn
from
  aws_cloudwatch_log_subscription_filter;
select
  name,
  log_group_name,
  creation_time,
  filter_pattern,
  destination_arn
from
  aws_cloudwatch_log_subscription_filter;

List the CloudWatch subscription filters that send error logs to CloudWatch log groups

Identify instances where Cloudwatch subscription filters are set up to send error logs to specific log groups, which can be beneficial in maintaining system health and troubleshooting issues.

select
  name,
  log_group_name,
  filter_pattern
from
  aws_cloudwatch_log_subscription_filter
where
  filter_pattern ilike '%error%';
select
  name,
  log_group_name,
  filter_pattern
from
  aws_cloudwatch_log_subscription_filter
where
  filter_pattern like '%error%';

Number of subscription filters attached to each cloudwatch log group

Analyze your AWS Cloudwatch setup to understand the distribution of subscription filters across different log groups. This can help in optimizing log management by identifying log groups that may have too many or too few subscription filters.

select
  log_group_name,
  count(name) as subscription_filter_count
from
  aws_cloudwatch_log_subscription_filter
group by
  log_group_name;
select
  log_group_name,
  count(name) as subscription_filter_count
from
  aws_cloudwatch_log_subscription_filter
group by
  log_group_name;
title description
Steampipe Table: aws_cloudwatch_metric - Query AWS CloudWatch Metrics using SQL
Allows users to query AWS CloudWatch Metrics to gather information about the performance of their AWS resources and applications.

Table: aws_cloudwatch_metric - Query AWS CloudWatch Metrics using SQL

The AWS CloudWatch Metrics is a feature of Amazon CloudWatch that allows you to monitor, store, and access your log files from Amazon Elastic Compute Cloud (EC2) instances, AWS CloudTrail, Route 53, and other sources. It provides data and actionable insights to monitor your applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. By using SQL queries with CloudWatch Metrics, you can gain a deeper understanding of your system's operational status.

Table Usage Guide

The aws_cloudwatch_metric table in Steampipe provides you with information about CloudWatch Metrics within AWS CloudWatch. This table allows you, as a DevOps engineer, to query metric-specific details, including metric names, namespaces, dimensions, and statistics. You can utilize this table to gather insights on metrics, such as tracking the CPU usage of an EC2 instance, monitoring the latency of an ELB, or even the request count of an API Gateway. The schema outlines the various attributes of the CloudWatch Metric for you, including the metric name, namespace, dimensions, statistics, and associated metadata.

Important Notes

  • You can include up to 10 dimensions in the dimensions_filter column.

Examples

Basic info

Explore the metrics and their associated namespaces in your AWS CloudWatch service. This can help you understand the different performance indicators being monitored and their corresponding AWS services, providing a comprehensive overview of your system's performance and health.

select
  metric_name,
  namespace,
  dimensions
from
  aws_cloudwatch_metric;
select
  metric_name,
  namespace,
  dimensions
from
  aws_cloudwatch_metric;

List EBS metrics

Explore the performance metrics related to Amazon Elastic Block Store (EBS) to gain insights into its operations and efficiency. This can help in identifying potential issues and optimizing resource usage.

select
  metric_name,
  namespace,
  dimensions
from
  aws_cloudwatch_metric
where
  namespace = 'AWS/EBS';
select
  metric_name,
  namespace,
  dimensions
from
  aws_cloudwatch_metric
where
  namespace = 'AWS/EBS';

List EBS VolumeReadOps metrics

Discover the segments that track the read operations on your Elastic Block Store (EBS) volumes. This is useful for monitoring the performance and usage patterns of your EBS volumes in AWS environment.

select
  metric_name,
  namespace,
  dimensions
from
  aws_cloudwatch_metric
where
  namespace = 'AWS/EBS'
  and metric_name = 'VolumeReadOps';
select
  metric_name,
  namespace,
  dimensions
from
  aws_cloudwatch_metric
where
  namespace = 'AWS/EBS'
  and metric_name = 'VolumeReadOps';

List metrics for a specific Redshift cluster

Explore the performance metrics of a specific Redshift cluster to gain insights into its operational efficiency and resource utilization. This can be useful in monitoring the cluster's health and optimizing its performance.

select
  metric_name,
  namespace,
  dimensions
from
  aws_cloudwatch_metric
where
  dimensions_filter = '[
    {"Name": "ClusterIdentifier", "Value": "my-cluster-1"}
  ]'::jsonb;
select
  metric_name,
  namespace,
  dimensions
from
  aws_cloudwatch_metric
where
  json_extract(dimensions_filter, '$[0].Name') = 'ClusterIdentifier' 
  and json_extract(dimensions_filter, '$[0].Value') = 'my-cluster-1';

List EC2 API metrics

Explore which API metrics are available for the EC2 service in AWS Cloudwatch. This is useful for monitoring and optimizing the performance of your EC2 instances.

select
  metric_name,
  namespace,
  dimensions
from
  aws_cloudwatch_metric
where
  dimensions_filter = '[
    {"Name": "Type", "Value": "API"},
    {"Name": "Service", "Value": "EC2"}
  ]'::jsonb;
select
  metric_name,
  namespace,
  dimensions
from
  aws_cloudwatch_metric
where
  json_extract(dimensions_filter, '$[0].Name') = 'Type' and json_extract(dimensions_filter, '$[0].Value') = 'API'
  and json_extract(dimensions_filter, '$[1].Name') = 'Service' and json_extract(dimensions_filter, '$[1].Value') = 'EC2';
title description
Steampipe Table: aws_cloudwatch_metric_data_point - Query AWS CloudWatch MetricDataPoints using SQL
Allows users to query AWS CloudWatch MetricDataPoints to fetch detailed information about the data points for a defined metric.

Table: aws_cloudwatch_metric_data_point - Query AWS CloudWatch MetricDataPoints using SQL

The AWS CloudWatch MetricDataPoints is a feature of Amazon CloudWatch that allows you to monitor, store, and access your AWS resources' data in the form of logs and metrics. This feature provides real-time data and insights that can help you optimize the performance and resource utilization of your applications. It also allows you to set alarms and react to changes in your AWS resources, making it easier to troubleshoot issues and discover trends.

Table Usage Guide

The aws_cloudwatch_metric_data_point table in Steampipe provides you with information about MetricDataPoints within AWS CloudWatch. This table enables you, as a DevOps engineer, to query specific details about the data points for a defined metric, including the timestamp, sample count, sum, minimum, and maximum values. You can utilize this table to gather insights on metrics, such as tracking the number of requests to an application over time, monitoring the CPU usage and network traffic of EC2 instances, and more. The schema outlines the various attributes of the MetricDataPoint, including the average, sample count, sum, minimum, and maximum values, along with the timestamp of the data point.

Important Notes

This table provides metric data points for the specified id. The maximum number of data points returned from a single call is 100,800.

  • You must specify id and expression, or id and metric_stat, in the where clause in order to use this table.
  • By default, this table provides data for the last 24 hours. You can provide the timestamp value in the following ways to fetch data in a range. The examples below can guide you.
    • timestamp >= '2023-03-11T00:00:00Z' and timestamp <= '2023-03-15T00:00:00Z'
    • timestamp between '2023-03-11T00:00:00Z' and '2023-03-15T00:00:00Z'
    • timestamp > '2023-03-15T00:00:00Z' (The data will be fetched from the provided time to the current time)
    • timestamp < '2023-03-15T00:00:00Z' (The data will be fetched from one day before the provided time to the provided time)
  • It's recommended that you specify the period column in the query to optimize the table output. If you do not specify the timestamp, the default value for period is 60 seconds. If you specify the timestamp, the period will be calculated based on the duration specified.
  • Using this table adds to the cost of your monthly bill from AWS. Optimizations have been put in place to minimize the impact as much as possible. Please refer to AWS CloudWatch pricing to understand the cost implications.
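Putting these notes together, a minimal sketch that pins both the time range and the period looks like the query below; the metric id and expression are placeholders:

```sql
select
  id,
  timestamp,
  value
from
  aws_cloudwatch_metric_data_point
where
  id = 'm1'
  and expression = 'select avg(CPUUtilization) from schema("AWS/EC2", InstanceId)'
  and period = 300
  and timestamp between '2023-03-11T00:00:00Z' and '2023-03-15T00:00:00Z'
order by
  timestamp;
```

A four-day window at a 300-second period yields about 1,152 points per metric, comfortably under the 100,800-point cap.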

Examples

Aggregate maximum CPU utilization of all EC2 instances for the last 24 hrs

Determine the peak CPU usage of all EC2 instances in the past day. This query can be used to monitor system performance and identify potential issues related to high CPU utilization.

select
  id,
  label,
  timestamp,
  period,
  value,
  expression
from
  aws_cloudwatch_metric_data_point
where
  id = 'm1'
  and expression = 'select max(CPUUtilization) from schema("AWS/EC2", InstanceId)'
order by
  timestamp;
select
  id,
  label,
  timestamp,
  period,
  value,
  expression
from
  aws_cloudwatch_metric_data_point
where
  id = 'm1'
  and expression = 'select max(CPUUtilization) from schema("AWS/EC2", InstanceId)'
order by
  timestamp;

Calculate error rate on the provided custom metric ID for the last 24 hrs

This query is useful for monitoring the error rate on a specific custom metric over the last 24 hours. It can help identify potential issues or anomalies in your system, allowing for timely troubleshooting and maintenance.

select
  id,
  label,
  timestamp,
  period,
  value,
  expression
from
  aws_cloudwatch_metric_data_point
where
  id = 'e1'
  and expression = 'SUM(METRICS(''error''))'
order by
  timestamp;
select
  id,
  label,
  timestamp,
  period,
  value,
  expression
from
  aws_cloudwatch_metric_data_point
where
  id = 'e1'
  and expression = 'SUM(METRICS(''error''))'
order by
  timestamp;

CPU average utilization of multiple EC2 instances over 80% for the last 5 days

Identify instances where the average CPU utilization of multiple EC2 instances has exceeded 80% in the past 5 days. This can be useful in monitoring resource usage and identifying potential performance issues.

select
  id,
  label,
  timestamp,
  period,
  round(value::numeric, 2) as avg_cpu,
  metric_stat
from
  aws_cloudwatch_metric_data_point
where
  id = 'm1'
  and value > 80
  and timestamp >= now() - interval '5 day'
  and metric_stat = '{
    "Metric": {
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Dimensions": [
      {
        "Name": "InstanceId",
        "Value": "i-0353536c53f7c8235"
      },
      {
        "Name": "InstanceId",
        "Value": "i-0dd7043e0f6f0f36d"
      }
    ]},
    "Stat": "Average"}'
order by
  timestamp;
select
  id,
  label,
  timestamp,
  period,
  round(value, 2) as avg_cpu,
  metric_stat
from
  aws_cloudwatch_metric_data_point
where
  id = 'm1'
  and value > 80
  and timestamp >= datetime('now','-5 day')
  and json_extract(metric_stat, '$.Metric.Namespace') = 'AWS/EC2'
  and json_extract(metric_stat, '$.Metric.MetricName') = 'CPUUtilization'
  and json_extract(metric_stat, '$.Metric.Dimensions[0].Name') = 'InstanceId'
  and json_extract(metric_stat, '$.Metric.Dimensions[0].Value') = 'i-0353536c53f7c8235'
  and json_extract(metric_stat, '$.Metric.Dimensions[1].Name') = 'InstanceId'
  and json_extract(metric_stat, '$.Metric.Dimensions[1].Value') = 'i-0dd7043e0f6f0f36d'
  and json_extract(metric_stat, '$.Stat') = 'Average'
order by
  timestamp;

Intervals where an EBS volume exceeds 1000 average read ops daily

Explore instances where an EBS volume exceeds a daily average of 1000 read operations. This can be useful in understanding the performance and load on your EBS volumes, helping you make informed decisions about capacity planning and resource allocation.

select
  id,
  label,
  timestamp,
  value,
  metric_stat
from
  aws_cloudwatch_metric_data_point
where
  id = 'm1'
  and value > 1000
  and period = 86400
  and scan_by = 'TimestampDescending'
  and timestamp between '2023-03-10T00:00:00Z' and '2023-03-16T00:00:00Z'
  and metric_stat = '{
    "Metric": {
    "Namespace": "AWS/EBS",
    "MetricName": "VolumeReadOps",
    "Dimensions": [
      {
        "Name": "VolumeId",
        "Value": "vol-00607053b218c6d74"
      }
    ]},
    "Stat": "Average"}';
select
  id,
  label,
  timestamp,
  value,
  metric_stat
from
  aws_cloudwatch_metric_data_point
where
  id = 'm1'
  and value > 1000
  and period = 86400
  and scan_by = 'TimestampDescending'
  and timestamp between '2023-03-10T00:00:00Z' and '2023-03-16T00:00:00Z'
  and json_extract(metric_stat, '$.Metric.Namespace') = 'AWS/EBS'
  and json_extract(metric_stat, '$.Metric.MetricName') = 'VolumeReadOps'
  and json_extract(metric_stat, '$.Metric.Dimensions[0].Name') = 'VolumeId'
  and json_extract(metric_stat, '$.Metric.Dimensions[0].Value') = 'vol-00607053b218c6d74'
  and json_extract(metric_stat, '$.Stat') = 'Average';

CacheHits sum below 10 for an ElastiCache cluster over the last 7 days

Determine the performance of an ElastiCache cluster over the past week by identifying instances where cache hit sums were less than 10. This can be useful for analyzing the effectiveness of your cache configuration and identifying potential areas for improvement.

select
  id,
  label,
  timestamp,
  value,
  metric_stat
from
  aws_cloudwatch_metric_data_point
where
  id = 'e1'
  and value < 10
  and timestamp >= now() - interval '7 day'
  and metric_stat = '{
    "Metric": {
    "Namespace": "AWS/ElastiCache",
    "MetricName": "CacheHits",
    "Dimensions": [
      {
        "Name": "CacheClusterId",
        "Value": "cluster-delete-001"
      }
    ]},
    "Stat": "Sum"}'
order by
  timestamp;
select
  id,
  label,
  timestamp,
  value,
  metric_stat
from
  aws_cloudwatch_metric_data_point
where
  id = 'e1'
  and value < 10
  and timestamp >= datetime('now', '-7 days')
  and metric_stat = '{
    "Metric": {
    "Namespace": "AWS/ElastiCache",
    "MetricName": "CacheHits",
    "Dimensions": [
      {
        "Name": "CacheClusterId",
        "Value": "cluster-delete-001"
      }
    ]},
    "Stat": "Sum"}'
order by
  timestamp;

Daily maximum bucket size statistics of an S3 bucket for an account

Explore the maximum storage usage of a specific S3 bucket in your AWS account within a specific timeframe. This can help manage storage capacity and understand usage patterns.

select
  id,
  label,
  timestamp,
  value,
  metric_stat
from
  aws_cloudwatch_metric_data_point
where
  id = 'e1'
  and source_account_id = '533743456432100'
  and timestamp between '2023-03-10T00:00:00Z' and '2023-03-16T00:00:00Z'
  and metric_stat = '{
    "Metric": {
    "Namespace": "AWS/S3",
    "MetricName": "BucketSizeBytes",
    "Dimensions": [
      {
        "Name": "BucketName",
        "Value": "steampipe-test"
      },
      {
        "Name": "StorageType",
        "Value": "StandardStorage"
      }
    ]},
    "Stat": "Maximum"}'
order by
  timestamp;
select
  id,
  label,
  timestamp,
  value,
  metric_stat
from
  aws_cloudwatch_metric_data_point
where
  id = 'e1'
  and source_account_id = '533743456432100'
  and timestamp between '2023-03-10T00:00:00Z' and '2023-03-16T00:00:00Z'
  and json_extract(metric_stat, '$.Metric.Namespace') = 'AWS/S3'
  and json_extract(metric_stat, '$.Metric.MetricName') = 'BucketSizeBytes'
  and json_extract(metric_stat, '$.Metric.Dimensions[0].Name') = 'BucketName'
  and json_extract(metric_stat, '$.Metric.Dimensions[0].Value') = 'steampipe-test'
  and json_extract(metric_stat, '$.Metric.Dimensions[1].Name') = 'StorageType'
  and json_extract(metric_stat, '$.Metric.Dimensions[1].Value') = 'StandardStorage'
  and json_extract(metric_stat, '$.Stat') = 'Maximum'
order by
  timestamp;
title description
Steampipe Table: aws_cloudwatch_metric_statistic_data_point - Query AWS CloudWatch Metric Statistics Data Point using SQL
Allows users to query AWS CloudWatch Metric Statistics Data Point to obtain detailed metrics data.

Table: aws_cloudwatch_metric_statistic_data_point - Query AWS CloudWatch Metric Statistics Data Point using SQL

The AWS CloudWatch Metric Statistics Data Point is a feature of the Amazon CloudWatch service. It allows you to retrieve statistical data about your AWS resources that is collected by CloudWatch. This statistical data can be used for monitoring, troubleshooting, and setting alarms for when specific thresholds are met.

Table Usage Guide

The aws_cloudwatch_metric_statistic_data_point table in Steampipe provides you with information about the data points for a specified metric in AWS CloudWatch. This table allows you, as a DevOps engineer, to query detailed metrics data, including timestamps, samples count, maximum, minimum, and average values. You can utilize this table to gather insights on metric data points, such as observing trends, identifying peaks or anomalies, and monitoring the overall performance of AWS resources. The schema outlines the various attributes of the metric data points, including the namespace, metric name, dimensions, and the period, start time, and end time of the data points.

Important Notes

The maximum number of data points that can be returned from a single call is 1,440. If you request more than 1,440 data points, CloudWatch returns an error. To reduce the number of data points, you can narrow the specified time range and make multiple requests across adjacent time ranges, or you can increase the specified period. Note that data points are not returned in chronological order.

  • If you need to fetch more than 1,440 data points, use the aws_cloudwatch_metric_data_point table.

  • You must specify metric_name and namespace in a where clause in order to use this table.

  • To fetch aggregate statistics, dimensions is not required. For all other queries, you must pass dimensions; the examples below can guide you.

  • By default, this table provides data for the last 24 hours. You can filter by timestamp in the following ways to fetch data for a specific range. The examples below can guide you.

    • timestamp >= '2023-03-11T00:00:00Z' and timestamp <= '2023-03-15T00:00:00Z'
    • timestamp between '2023-03-11T00:00:00Z' and '2023-03-15T00:00:00Z'
    • timestamp > '2023-03-15T00:00:00Z' (the data will be fetched from the provided time to the current time)
    • timestamp < '2023-03-15T00:00:00Z' (the data will be fetched from one day before the provided time to the provided time)
  • We recommend specifying the period column in the query to optimize the table output. If you do not specify the timestamp, the default period is 60 seconds. If you do specify the timestamp, the period is calculated from the duration to provide a good spread under the 1,440 data-point limit.
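The period derived from a timestamp range can be approximated as follows. This is an illustrative sketch, not the table's exact rounding rule, and it assumes periods above one minute must be multiples of 60 seconds (as CloudWatch requires):

```python
import math

MAX_DATA_POINTS = 1440  # CloudWatch's per-request data-point limit

def choose_period(duration_seconds: int) -> int:
    """Pick a period (in seconds) so duration / period stays within 1,440 points."""
    raw = math.ceil(duration_seconds / MAX_DATA_POINTS)
    # Periods longer than one minute must be multiples of 60 seconds.
    return max(60, math.ceil(raw / 60) * 60)

# 24 hours fits at the default 60-second period (exactly 1,440 points).
print(choose_period(24 * 3600))      # 60
# A 5-day range needs a 300-second period to stay within the limit.
print(choose_period(5 * 24 * 3600))  # 300
```

This illustrates why narrowing the time range or widening the period keeps a single request under the limit.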

Examples

Aggregate CPU utilization of all EC2 instances for the last 24 hours

Explore the extent of CPU usage across all EC2 instances over the past day. This is useful to monitor system performance, identify potential bottlenecks, and plan for capacity upgrades.

select
  metric_name,
  timestamp,
  round(minimum::numeric, 2) as min_cpu,
  round(maximum::numeric, 2) as max_cpu,
  round(average::numeric, 2) as avg_cpu,
  sum,
  sample_count
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/EC2'
  and metric_name = 'CPUUtilization'
order by
  timestamp;
select
  metric_name,
  timestamp,
  round(minimum, 2) as min_cpu,
  round(maximum, 2) as max_cpu,
  round(average, 2) as avg_cpu,
  sum,
  sample_count
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/EC2'
  and metric_name = 'CPUUtilization'
order by
  timestamp;

CPU average utilization of an EC2 instance over 80% for the last 5 days

Determine the instances where the average utilization of a specific EC2 instance has exceeded 80% in the past 5 days. This query is useful for monitoring system performance and identifying potential issues with resource allocation.

select
  jsonb_pretty(dimensions) as dimensions,
  timestamp,
  round(average::numeric, 2) as avg_cpu
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/EC2'
  and metric_name = 'CPUUtilization'
  and average > 80
  and timestamp >= now() - interval '5 day'
  and dimensions = '[
    {"Name": "InstanceId", "Value": "i-0dd7043e0f6f0f36d"}
    ]'
order by
  timestamp;
select
  json_pretty(dimensions) as dimensions,
  timestamp,
  round(average, 2) as avg_cpu
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/EC2'
  and metric_name = 'CPUUtilization'
  and average > 80
  and timestamp >= datetime('now', '-5 day')
  and dimensions = '[
    {"Name": "InstanceId", "Value": "i-0dd7043e0f6f0f36d"}
    ]'
order by
  timestamp;

Intervals where a volume exceeds 1000 average read ops

Identify instances where the average read operations on a specific volume surpasses a set threshold within a defined timeframe. This query aids in analyzing periods of high read operations, helping to optimize resource usage and performance in AWS EBS.

select
  jsonb_pretty(dimensions) as dimensions,
  timestamp,
  average
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/EBS'
  and metric_name = 'VolumeReadOps'
  and average > 1000
  and timestamp between '2023-03-10T00:00:00Z' and '2023-03-16T00:00:00Z'
  and period = 300
  and dimensions = '[
    {"Name": "VolumeId", "Value": "vol-00607053b218c6d74"}
    ]'
order by
  timestamp;
select
  dimensions,
  timestamp,
  average
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/EBS'
  and metric_name = 'VolumeReadOps'
  and average > 1000
  and timestamp between '2023-03-10T00:00:00Z' and '2023-03-16T00:00:00Z'
  and period = 300
  and json_extract(dimensions, '$[0].Name') = 'VolumeId'
  and json_extract(dimensions, '$[0].Value') = 'vol-00607053b218c6d74'
order by
  timestamp;

CacheHit sum below 10 of an ElastiCache cluster for the last 7 days
Analyze the performance of an ElastiCache cluster by tracking instances where cache hits were less than 10 over the past week. This could be useful to identify potential issues with the cache's configuration or usage patterns.

select
  jsonb_pretty(dimensions) as dimensions,
  timestamp,
  sum
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/ElastiCache'
  and metric_name = 'CacheHits'
  and sum < 10
  and timestamp >= now() - interval '7 day'
  and dimensions = '[
    {"Name": "CacheClusterId", "Value": "cluster-delete-001"}
    ]'
order by
  timestamp;
select
  json_pretty(dimensions) as dimensions,
  timestamp,
  sum
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/ElastiCache'
  and metric_name = 'CacheHits'
  and sum < 10
  and timestamp >= datetime('now', '-7 day')
  and dimensions = '[
    {"Name": "CacheClusterId", "Value": "cluster-delete-001"}
    ]'
order by
  timestamp;

Lambda function daily maximum duration over 100 milliseconds

Discover the instances when your AWS Lambda function's maximum daily duration exceeds 100 milliseconds within a specific time frame. This can be useful for identifying potential performance issues or bottlenecks in your application.

select
  jsonb_pretty(dimensions) as dimensions,
  timestamp,
  maximum
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/Lambda'
  and metric_name = 'Duration'
  and maximum > 100
  and timestamp >= '2023-02-15T00:00:00Z'
  and timestamp <= '2023-03-15T00:00:00Z'
  and period = 86400
  and dimensions = '[
    {"Name": "FunctionName", "Value": "test"}
    ]'
order by
  timestamp;
select
  json_pretty(dimensions) as dimensions,
  timestamp,
  maximum
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/Lambda'
  and metric_name = 'Duration'
  and maximum > 100
  and timestamp >= '2023-02-15T00:00:00Z'
  and timestamp <= '2023-03-15T00:00:00Z'
  and period = 86400
  and dimensions = '[
    {"Name": "FunctionName", "Value": "test"}
    ]'
order by
  timestamp;

CPU average utilization of an RDS DB instance over 80% for the last 30 days

This query is used to monitor the performance of an RDS DB instance by tracking its CPU utilization. If the average CPU usage exceeds 80% over the past 30 days, it may indicate a need for more resources or optimization to prevent potential system slowdowns or failures.

select
  jsonb_pretty(dimensions) as dimensions,
  timestamp,
  round(average::numeric, 2) as avg_cpu
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/RDS'
  and metric_name = 'CPUUtilization'
  and average > 80
  and timestamp >= now() - interval '30 day'
  and dimensions = '[
    {"Name": "DBInstanceIdentifier", "Value": "database-1"}
    ]'
order by
  timestamp;
select
  json_pretty(dimensions) as dimensions,
  timestamp,
  round(average, 2) as avg_cpu
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/RDS'
  and metric_name = 'CPUUtilization'
  and average > 80
  and timestamp >= datetime('now', '-30 day')
  and dimensions = '[
    {"Name": "DBInstanceIdentifier", "Value": "database-1"}
    ]'
order by
  timestamp;

Maximum Bucket size daily statistics of an S3 bucket

Explore the daily storage usage of a specific S3 bucket over a given time period. This is useful for tracking storage trends and planning for future capacity needs.

select
  jsonb_pretty(dimensions) as dimensions,
  timestamp,
  maximum
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/S3'
  and metric_name = 'BucketSizeBytes'
  and timestamp between '2023-03-06T00:00:00Z' and '2023-03-15T00:00:00Z'
  and period = 86400
  and dimensions = '[
    {"Name": "BucketName", "Value": "steampipe-test"},
    {"Name": "StorageType", "Value": "StandardStorage"}
    ]'
order by
  timestamp;
select
  json_pretty(dimensions) as dimensions,
  timestamp,
  maximum
from
  aws_cloudwatch_metric_statistic_data_point
where
  namespace = 'AWS/S3'
  and metric_name = 'BucketSizeBytes'
  and timestamp between '2023-03-06T00:00:00Z' and '2023-03-15T00:00:00Z'
  and period = 86400
  and dimensions = '[
    {"Name": "BucketName", "Value": "steampipe-test"},
    {"Name": "StorageType", "Value": "StandardStorage"}
    ]'
order by
  timestamp;
title description
Steampipe Table: aws_codeartifact_domain - Query AWS CodeArtifact Domains using SQL
Allows users to query AWS CodeArtifact Domains for details such as domain ownership, encryption key, and policy information.

Table: aws_codeartifact_domain - Query AWS CodeArtifact Domains using SQL

The AWS CodeArtifact Domain is a fundamental resource within the AWS CodeArtifact service, which is a fully managed artifact repository service. It enables you to easily store, publish, and share software packages in a scalable and secure manner. Each domain allows for the management and organization of your package assets across multiple repositories.

Table Usage Guide

The aws_codeartifact_domain table in Steampipe provides you with information about domains within AWS CodeArtifact. This table allows you, as a DevOps engineer, to query domain-specific details, including domain ownership, encryption key, and associated policy information. You can utilize this table to gather insights on domains, such as who owns a domain, what encryption key is used, and what policies are applied. The schema outlines the various attributes of the AWS CodeArtifact domain for you, including the domain ARN, domain owner, encryption key, and associated policies.

Examples

Basic info

Discover the segments that provide insights into the creation, ownership, and status of AWS CodeArtifact domains, in order to better understand and manage your resources. This could be beneficial for maintaining security protocols and efficient resource allocation.

select
  arn,
  created_time,
  encryption_key,
  status,
  owner,
  tags
from
  aws_codeartifact_domain;
select
  arn,
  created_time,
  encryption_key,
  status,
  owner,
  tags
from
  aws_codeartifact_domain;

List unencrypted domains

Identify instances where AWS CodeArtifact domains are unencrypted, providing a useful method to highlight potential security vulnerabilities within your AWS infrastructure. This can aid in enhancing data protection measures by pinpointing areas that require encryption implementation.

select
  arn,
  created_time,
  status,
  s3_bucket_arn,
  tags
from
  aws_codeartifact_domain
where
  encryption_key is null;
select
  arn,
  created_time,
  status,
  s3_bucket_arn,
  tags
from
  aws_codeartifact_domain
where
  encryption_key is null;

List inactive domains

Determine the areas in which domains are not actively used within the AWS CodeArtifact service. This can be useful in identifying unused resources, potentially helping to reduce costs and optimize resource management.

select
  arn,
  created_time,
  status,
  s3_bucket_arn,
  tags
from
  aws_codeartifact_domain
where
  status != 'Active';
select
  arn,
  created_time,
  status,
  s3_bucket_arn,
  tags
from
  aws_codeartifact_domain
where
  status != 'Active';

List domain policy statements that grant external access

Explore which domain policy statements in your AWS CodeArtifact domain allow external access. This is useful to identify potential security vulnerabilities and ensure that only authorized entities have access to your domain.

select
  arn,
  p as principal,
  a as action,
  s ->> 'Effect' as effect
from
  aws_codeartifact_domain,
  jsonb_array_elements(policy_std -> 'Statement') as s,
  jsonb_array_elements_text(s -> 'Principal' -> 'AWS') as p,
  string_to_array(p, ':') as pa,
  jsonb_array_elements_text(s -> 'Action') as a
where
  s ->> 'Effect' = 'Allow'
  and (
    pa [5] != account_id
    or p = '*'
  );
Error: The corresponding SQLite query is unavailable.
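The `pa [5]` index in the Postgres query works because ARNs follow the format `arn:partition:service:region:account-id:resource`: splitting on `:` places the account ID at 1-based position 5 in `string_to_array`. A quick illustration (the ARN below is a hypothetical example, not taken from this document):

```python
# ARN format: arn:partition:service:region:account-id:resource
# Splitting on ':' yields the account ID at index 4 (0-based),
# i.e. position 5 in Postgres's 1-based string_to_array indexing.
arn = "arn:aws:iam::123456789012:root"  # hypothetical example ARN
parts = arn.split(":")
print(parts[4])  # 123456789012
```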

Get S3 bucket details associated with each domain

Identify the S3 bucket associated with each domain and review the domain's encryption key alongside whether the bucket policy is public. This can be useful to gain insights into the security configuration of your AWS CodeArtifact domains and associated S3 buckets.

select
  d.arn as domain_arn,
  b.arn as bucket_arn,
  d.encryption_key as domain_encryption_key,
  bucket_policy_is_public
from
  aws_codeartifact_domain d
  join aws_s3_bucket b on d.s3_bucket_arn = b.arn;
select
  d.arn as domain_arn,
  b.arn as bucket_arn,
  d.encryption_key as domain_encryption_key,
  bucket_policy_is_public
from
  aws_codeartifact_domain d
  join aws_s3_bucket b on d.s3_bucket_arn = b.arn;

Get KMS key details associated with each domain

Explore which domains are associated with specific KMS keys to gain insights into their encryption status and management. This can help in assessing the security configuration of your AWS CodeArtifact domains.

select
  d.arn as domain_arn,
  d.encryption_key as domain_encryption_key,
  key_manager,
  key_state
from
  aws_codeartifact_domain d
  join aws_kms_key k on d.encryption_key = k.arn;
select
  d.arn as domain_arn,
  d.encryption_key as domain_encryption_key,
  key_manager,
  key_state
from
  aws_codeartifact_domain d
  join aws_kms_key k on d.encryption_key = k.arn;

List domains using customer managed encryption

Discover the segments that use customer-managed encryption in your AWS CodeArtifact domains. This can be beneficial for assessing your security protocols and identifying areas where you're maintaining direct control over your encryption keys.

select
  d.arn as domain_arn,
  d.encryption_key as domain_encryption_key,
  key_manager,
  key_state
from
  aws_codeartifact_domain d
  join aws_kms_key k on d.encryption_key = k.arn
where 
  key_manager = 'CUSTOMER';
select
  d.arn as domain_arn,
  d.encryption_key as domain_encryption_key,
  key_manager,
  key_state
from
  aws_codeartifact_domain as d
  join aws_kms_key as k on d.encryption_key = k.arn
where 
  key_manager = 'CUSTOMER';
title description
Steampipe Table: aws_codeartifact_repository - Query AWS CodeArtifact Repository using SQL
Allows users to query AWS CodeArtifact Repository data, including details about the repository, its domain ownership, and associated metadata.

Table: aws_codeartifact_repository - Query AWS CodeArtifact Repository using SQL

The AWS CodeArtifact Repository is a fully managed software artifact repository service that makes it easier for organizations to securely store, publish, and share packages used in their software development process. AWS CodeArtifact eliminates the need for you to set up, operate, and scale the infrastructure for your artifact repositories, allowing you to focus on your software development. It works with commonly used package managers and build tools, and it integrates with CI/CD pipelines to seamlessly publish packages.

Table Usage Guide

The aws_codeartifact_repository table in Steampipe provides you with information about repositories within AWS CodeArtifact. This table allows you, as a DevOps engineer, to query repository specific details, including the repository's domain owner, domain name, repository name, administrator account, and associated metadata. You can utilize this table to gather insights on repositories, such as their ownership, associated domains, and more. The schema outlines the various attributes of the CodeArtifact repository for you, including the ARN, repository description, domain owner, domain name, and associated tags.

Examples

Basic info

Explore which AWS CodeArtifact repositories are owned by different domain owners and identify instances where specific tags and upstreams are used. This can help in gaining insights into the organization and management of your AWS resources.

select
  arn,
  domain_name,
  domain_owner,
  upstreams,
  tags
from
  aws_codeartifact_repository;
select
  arn,
  domain_name,
  domain_owner,
  upstreams,
  tags
from
  aws_codeartifact_repository;

List repositories with endpoints

Identify instances where repositories have specified endpoints. This could be useful in managing and organizing your AWS CodeArtifact repositories, by focusing on those repositories that have assigned endpoints.

select
  arn,
  domain_name,
  domain_owner,
  tags,
  repository_endpoint
from
  aws_codeartifact_repository
where
  repository_endpoint is not null;
select
  arn,
  domain_name,
  domain_owner,
  tags,
  repository_endpoint
from
  aws_codeartifact_repository
where
  repository_endpoint is not null;

List repository policy statements that grant external access

This example is used to identify any repository policy statements in the AWS CodeArtifact service that may be granting access to external entities. This is useful for auditing security and ensuring that no unauthorized access is being permitted.

select
  arn,
  p as principal,
  a as action,
  s ->> 'Effect' as effect
from
  aws_codeartifact_repository,
  jsonb_array_elements(policy_std -> 'Statement') as s,
  jsonb_array_elements_text(s -> 'Principal' -> 'AWS') as p,
  string_to_array(p, ':') as pa,
  jsonb_array_elements_text(s -> 'Action') as a
where
  s ->> 'Effect' = 'Allow'
  and (
    pa [5] != account_id
    or p = '*'
  );
Error: The corresponding SQLite query is unavailable.

Get upstream package details associated with each repository

Analyze the settings to understand the association between each repository and its corresponding upstream package details in the AWS CodeArtifact service. This can aid in managing dependencies and ensuring the correct version of a package is being used.

select
  arn,
  domain_name,
  domain_owner,
  u ->> 'RepositoryName' as upstream_repo_name
from
  aws_codeartifact_repository,
  jsonb_array_elements(upstreams) u;
select
  arn,
  domain_name,
  domain_owner,
  json_extract(u.value, '$.RepositoryName') as upstream_repo_name
from
  aws_codeartifact_repository,
  json_each(upstreams) u;
title description
Steampipe Table: aws_codebuild_build - Query AWS CodeBuild Build using SQL
Allows users to query AWS CodeBuild Build to retrieve information about AWS CodeBuild projects' builds.

Table: aws_codebuild_build - Query AWS CodeBuild Build using SQL

AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. It allows you to build and test code with continuous scaling and enables you to pay only for the build time you use. CodeBuild eliminates the need to provision, manage, and scale your own build servers.

Table Usage Guide

The aws_codebuild_build table in Steampipe provides you with information about builds in AWS CodeBuild. This table allows you as a DevOps engineer to query build-specific details, including build statuses, source details, build environment, and associated metadata. You can utilize this table to gather insights on builds, such as build status, source version, the duration of the build, and more. The schema outlines for you the various attributes of the CodeBuild build, including the build ID, build status, start and end time, and associated tags.

Examples

Basic info

Explore which AWS CodeBuild projects have been completed and gain insights into their build status, duration, and other related details. This can help in managing and optimizing the build processes in your AWS environment.

select
  arn,
  id,
  build_complete,
  timeout_in_minutes,
  project_name,
  build_status,
  encryption_key,
  end_time,
  region
from
  aws_codebuild_build;
select
  arn,
  id,
  build_complete,
  timeout_in_minutes,
  project_name,
  build_status,
  encryption_key,
  end_time,
  region
from
  aws_codebuild_build;

List encrypted build output artifacts

Discover the segments that include encrypted build output artifacts, allowing you to focus on the areas where secure data is being used in your AWS CodeBuild projects.

select
  arn,
  id,
  encryption_key
from
  aws_codebuild_build
where
  encryption_key is not null;
select
  arn,
  id,
  encryption_key
from
  aws_codebuild_build
where
  encryption_key is not null;

List complete builds

Explore which AWS CodeBuild projects have been fully built. This is useful for assessing project progress and identifying any projects that may still be in progress or have yet to begin.

select
  id,
  arn,
  artifacts,
  build_complete
from
  aws_codebuild_build
where
  build_complete;
select
  id,
  arn,
  artifacts,
  build_complete
from
  aws_codebuild_build
where
  build_complete = 1;

List VPC configuration details of builds

Explore the security aspects of your AWS CodeBuild projects by examining the Virtual Private Cloud (VPC) configurations. This can help you understand and manage the security group IDs, subnets, and VPC IDs associated with your builds.

select
  id,
  arn,
  vpc_config ->> 'SecurityGroupIds' as security_group_id,
  vpc_config ->> 'Subnets' as subnets,
  vpc_config ->> 'VpcId' as vpc_id
from
  aws_codebuild_build;
select
  id,
  arn,
  json_extract(vpc_config, '$.SecurityGroupIds') as security_group_id,
  json_extract(vpc_config, '$.Subnets') as subnets,
  json_extract(vpc_config, '$.VpcId') as vpc_id
from
  aws_codebuild_build;

List artifact details of builds

This query is useful to gain insights into the specific details of artifacts associated with various builds in AWS CodeBuild. It helps in understanding the access level, encryption status, and other crucial aspects of these artifacts, which can aid in better management and security of your build artifacts.

select
  id,
  arn,
  artifacts ->> 'ArtifactIdentifier' as artifact_id,
  artifacts ->> 'BucketOwnerAccess' as bucket_owner_access,
  artifacts ->> 'EncryptionDisabled' as encryption_disabled,
  artifacts ->> 'OverrideArtifactName' as override_artifact_name
from
  aws_codebuild_build;
select
  id,
  arn,
  json_extract(artifacts, '$.ArtifactIdentifier') as artifact_id,
  json_extract(artifacts, '$.BucketOwnerAccess') as bucket_owner_access,
  json_extract(artifacts, '$.EncryptionDisabled') as encryption_disabled,
  json_extract(artifacts, '$.OverrideArtifactName') as override_artifact_name
from
  aws_codebuild_build;

Get environment details of builds

Explore the specific environmental aspects of your builds in AWS CodeBuild. This can help you understand the settings like compute type, image, and credentials used, which can be useful for troubleshooting or optimizing your build processes.

select
  id,
  environment ->> 'Certificate' as environment_certificate,
  environment ->> 'ComputeType' as environment_compute_type,
  environment ->> 'EnvironmentVariables' as environment_variables,
  environment ->> 'Image' as environment_image,
  environment ->> 'ImagePullCredentialsType' as environment_image_pull_credentials_type,
  environment ->> 'PrivilegedMode' as environment_privileged_mode,
  environment ->> 'RegistryCredential' as environment_registry_credential,
  environment ->> 'Type' as environment_type
from
  aws_codebuild_build;
select
  id,
  json_extract(environment, '$.Certificate') as environment_certificate,
  json_extract(environment, '$.ComputeType') as environment_compute_type,
  json_extract(environment, '$.EnvironmentVariables') as environment_variables,
  json_extract(environment, '$.Image') as environment_image,
  json_extract(environment, '$.ImagePullCredentialsType') as environment_image_pull_credentials_type,
  json_extract(environment, '$.PrivilegedMode') as environment_privileged_mode,
  json_extract(environment, '$.RegistryCredential') as environment_registry_credential,
  json_extract(environment, '$.Type') as environment_type
from
  aws_codebuild_build;

Get log details of builds

Gain insights into the status and location of your build logs. This query is useful for identifying potential issues with log storage and accessibility, such as encryption status and bucket owner access.

select
  id,
  logs -> 'S3Logs' ->> 'Status' as s3_log_status,
  logs -> 'S3Logs' ->> 'Location' as s3_log_location,
  logs -> 'S3Logs' ->> 'BucketOwnerAccess' as s3_log_bucket_owner_access,
  logs -> 'S3Logs' ->> 'EncryptionDisabled' as s3_log_encryption_disabled,
  logs ->> 'DeepLink' as deep_link,
  logs ->> 'GroupName' as group_name,
  logs ->> 'S3LogsArn' as s3_logs_arn,
  logs ->> 'S3DeepLink' as s3_deep_link,
  logs ->> 'StreamName' as stream_name,
  logs ->> 'CloudWatchLogsArn' as cloud_watch_logs_arn,
  logs -> 'CloudWatchLogs' ->> 'Status' as cloud_watch_logs_status,
  logs -> 'CloudWatchLogs' ->> 'GroupName' as cloud_watch_logs_group_name,
  logs -> 'CloudWatchLogs' ->> 'StreamName' as cloud_watch_logs_stream_name
from
  aws_codebuild_build;
select
  id,
  json_extract(logs, '$.S3Logs.Status') as s3_log_status,
  json_extract(logs, '$.S3Logs.Location') as s3_log_location,
  json_extract(logs, '$.S3Logs.BucketOwnerAccess') as s3_log_bucket_owner_access,
  json_extract(logs, '$.S3Logs.EncryptionDisabled') as s3_log_encryption_disabled,
  json_extract(logs, '$.DeepLink') as deep_link,
  json_extract(logs, '$.GroupName') as group_name,
  json_extract(logs, '$.S3LogsArn') as s3_logs_arn,
  json_extract(logs, '$.S3DeepLink') as s3_deep_link,
  json_extract(logs, '$.StreamName') as stream_name,
  json_extract(logs, '$.CloudWatchLogsArn') as cloud_watch_logs_arn,
  json_extract(logs, '$.CloudWatchLogs.Status') as cloud_watch_logs_status,
  json_extract(logs, '$.CloudWatchLogs.GroupName') as cloud_watch_logs_group_name,
  json_extract(logs, '$.CloudWatchLogs.StreamName') as cloud_watch_logs_stream_name
from
  aws_codebuild_build;

Get network interface details of builds

Explore the network configurations of your AWS CodeBuild projects. This allows you to assess the network interface and subnet details, which can be crucial for understanding your project's networking setup and troubleshooting connectivity issues.

select
  id,
  network_interfaces ->> 'NetworkInterfaceId' as network_interface_id,
  network_interfaces ->> 'SubnetId' as subnet_id
from
  aws_codebuild_build;
select
  id,
  json_extract(network_interfaces, '$.NetworkInterfaceId') as network_interface_id,
  json_extract(network_interfaces, '$.SubnetId') as subnet_id
from
  aws_codebuild_build;

List phase details of builds

Explore the progress of your build processes by examining the start and end times, duration, and status of each phase. This can help you identify potential bottlenecks or inefficiencies in your build process.

select
  id,
  p ->> 'EndTime' as end_time,
  p ->> 'Contexts' as contexts,
  p ->> 'PhaseType' as phase_type,
  p ->> 'StartTime' as start_time,
  p ->> 'DurationInSeconds' as duration_in_seconds,
  p ->> 'PhaseStatus' as phase_status
from
  aws_codebuild_build,
  jsonb_array_elements(phases) as p;
select
  aws_codebuild_build.id,
  json_extract(p, '$.EndTime') as end_time,
  json_extract(p, '$.Contexts') as contexts,
  json_extract(p, '$.PhaseType') as phase_type,
  json_extract(p, '$.StartTime') as start_time,
  json_extract(p, '$.DurationInSeconds') as duration_in_seconds,
  json_extract(p, '$.PhaseStatus') as phase_status
from
  aws_codebuild_build,
  json_each(phases) as p;

Get source details of builds

Determine the areas in which the source details of various builds can be analyzed for security and performance. This is beneficial for understanding the build configurations and identifying potential areas of improvement.

select
  id,
  source ->> 'Auth' as source_auth,
  source ->> 'BuildStatusConfig' as source_build_status_config,
  source ->> 'Buildspec' as source_buildspec,
  source ->> 'GitCloneDepth' as source_git_clone_depth,
  source ->> 'GitSubmodulesConfig' as source_git_submodules_config,
  source ->> 'InsecureSsl' as source_insecure_ssl,
  source ->> 'Location' as source_location,
  source ->> 'ReportBuildStatus' as source_report_build_status,
  source ->> 'SourceIdentifier' as source_identifier,
  source ->> 'Type' as source_type
from
  aws_codebuild_build;
select
  id,
  json_extract(source, '$.Auth') as source_auth,
  json_extract(source, '$.BuildStatusConfig') as source_build_status_config,
  json_extract(source, '$.Buildspec') as source_buildspec,
  json_extract(source, '$.GitCloneDepth') as source_git_clone_depth,
  json_extract(source, '$.GitSubmodulesConfig') as source_git_submodules_config,
  json_extract(source, '$.InsecureSsl') as source_insecure_ssl,
  json_extract(source, '$.Location') as source_location,
  json_extract(source, '$.ReportBuildStatus') as source_report_build_status,
  json_extract(source, '$.SourceIdentifier') as source_identifier,
  json_extract(source, '$.Type') as source_type
from
  aws_codebuild_build;

List file system location details of builds

Explore the specific details of file system locations used in different builds. This can help in understanding the organization of builds and making improvements in the build process.

select
  id,
  f ->> 'Identifier' as file_system_identifier,
  f ->> 'Location' as file_system_location,
  f ->> 'MountOptions' as file_system_mount_options,
  f ->> 'MountPoint' as file_system_mount_point,
  f ->> 'Type' as file_system_type
from
  aws_codebuild_build,
  jsonb_array_elements(file_system_locations) as f;
select
  aws_codebuild_build.id,
  json_extract(f.value, '$.Identifier') as file_system_identifier,
  json_extract(f.value, '$.Location') as file_system_location,
  json_extract(f.value, '$.MountOptions') as file_system_mount_options,
  json_extract(f.value, '$.MountPoint') as file_system_mount_point,
  json_extract(f.value, '$.Type') as file_system_type
from
  aws_codebuild_build,
  json_each(file_system_locations) as f;
title description
Steampipe Table: aws_codebuild_project - Query AWS CodeBuild Projects using SQL
Allows users to query AWS CodeBuild Projects and retrieve comprehensive information about each project.

Table: aws_codebuild_project - Query AWS CodeBuild Projects using SQL

The AWS CodeBuild Project is a component of AWS CodeBuild, a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. It provides prepackaged build environments for popular programming languages and build tools, such as Apache Maven, Gradle, and more.

Table Usage Guide

The aws_codebuild_project table in Steampipe provides you with information about projects within AWS CodeBuild. This table allows you, as a DevOps engineer, to query project-specific details, including project ARN, creation date, project name, service role, and other associated metadata. You can utilize this table to gather insights on projects, such as the status of each project, the source code repository used, the build environment configuration, and more. The schema outlines the various attributes of the CodeBuild project for you, including the project ARN, creation date, last modified date, and associated tags.

Examples

Basic info

Explore the features and settings of your AWS CodeBuild projects to better understand their configuration, such as encryption details, build limits, and regional distribution. This can help in assessing project performance, security, and operational efficiency.

select
  name,
  description,
  encryption_key,
  concurrent_build_limit,
  source_version,
  service_role,
  created,
  last_modified,
  region
from
  aws_codebuild_project;
select
  name,
  description,
  encryption_key,
  concurrent_build_limit,
  source_version,
  service_role,
  created,
  last_modified,
  region
from
  aws_codebuild_project;

Get the build input details for each project

Determine the areas in which each project's build input details are configured, such as authorization, build status, and source location. This can help in managing and troubleshooting the build process in AWS CodeBuild projects.

select
  name,
  source_version,
  source ->> 'Auth' as auth,
  source ->> 'BuildStatusConfig' as build_status_config,
  source ->> 'Buildspec' as build_spec,
  source ->> 'GitCloneDepth' as git_clone_depth,
  source ->> 'GitSubmodulesConfig' as git_submodules_config,
  source ->> 'InsecureSsl' as insecure_ssl,
  source ->> 'Location' as location,
  source ->> 'ReportBuildStatus' as report_build_status,
  source ->> 'SourceIdentifier' as source_identifier,
  source ->> 'Type' as type
from
  aws_codebuild_project;
select
  name,
  source_version,
  json_extract(source, '$.Auth') as auth,
  json_extract(source, '$.BuildStatusConfig') as build_status_config,
  json_extract(source, '$.Buildspec') as build_spec,
  json_extract(source, '$.GitCloneDepth') as git_clone_depth,
  json_extract(source, '$.GitSubmodulesConfig') as git_submodules_config,
  json_extract(source, '$.InsecureSsl') as insecure_ssl,
  json_extract(source, '$.Location') as location,
  json_extract(source, '$.ReportBuildStatus') as report_build_status,
  json_extract(source, '$.SourceIdentifier') as source_identifier,
  json_extract(source, '$.Type') as type
from
  aws_codebuild_project;

List projects which are not created within a VPC

Determine the areas in which AWS CodeBuild projects have been created without a Virtual Private Cloud (VPC) configuration. This is useful for identifying potential security risks and ensuring all projects follow best practices for network security.

select
  name,
  description,
  vpc_config
from
  aws_codebuild_project
where
  vpc_config is null;
select
  name,
  description,
  vpc_config
from
  aws_codebuild_project
where
  vpc_config is null;

List projects that do not have logging enabled

Identify projects that have disabled logging, allowing you to pinpoint areas where crucial data might not be being recorded for future analysis. This is particularly useful for maintaining project transparency and troubleshooting potential issues.

select
  name,
  description,
  logs_config -> 'CloudWatchLogs' ->> 'Status' as cloud_watch_logs_status,
  logs_config -> 'S3Logs' ->> 'Status' as s3_logs_status
from
  aws_codebuild_project
where
  logs_config -> 'CloudWatchLogs' ->> 'Status' = 'DISABLED'
  and logs_config -> 'S3Logs' ->> 'Status' = 'DISABLED';
select
  name,
  description,
  json_extract(logs_config, '$.CloudWatchLogs.Status') as cloud_watch_logs_status,
  json_extract(logs_config, '$.S3Logs.Status') as s3_logs_status
from
  aws_codebuild_project
where
  json_extract(logs_config, '$.CloudWatchLogs.Status') = 'DISABLED'
  and json_extract(logs_config, '$.S3Logs.Status') = 'DISABLED';
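As a variant of the query above, using the same columns and no new assumptions, projects that log to only one of the two destinations (CloudWatch but not S3, or vice versa) can be found by comparing the two statuses; the SQLite form follows the same pattern with json_extract.

```sql
select
  name,
  logs_config -> 'CloudWatchLogs' ->> 'Status' as cloud_watch_logs_status,
  logs_config -> 'S3Logs' ->> 'Status' as s3_logs_status
from
  aws_codebuild_project
where
  (logs_config -> 'CloudWatchLogs' ->> 'Status') != (logs_config -> 'S3Logs' ->> 'Status');
```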

List private build projects

Determine the areas in which your AWS CodeBuild projects are set to private, allowing you to gain insights into your project visibility settings and understand where potential privacy concerns may arise.

select
  name,
  arn,
  project_visibility
from
  aws_codebuild_project
where
  project_visibility = 'PRIVATE';
select
  name,
  arn,
  project_visibility
from
  aws_codebuild_project
where
  project_visibility = 'PRIVATE';
title description
Steampipe Table: aws_codebuild_source_credential - Query AWS CodeBuild Source Credentials using SQL
Allows users to query AWS CodeBuild Source Credentials

Table: aws_codebuild_source_credential - Query AWS CodeBuild Source Credentials using SQL

The AWS CodeBuild Source Credentials are used to interact with external code repositories. They store the authentication information required to access private repositories in GitHub, BitBucket, and AWS CodeCommit. This feature enables secure connection to these repositories, allowing AWS CodeBuild to read the source code for build operations.

Table Usage Guide

The aws_codebuild_source_credential table in Steampipe provides you with information about source credentials within AWS CodeBuild. This table allows you as a DevOps engineer to query specific details about source credentials, including the ARN, server type, authentication type, and token. You can utilize this table to gather insights on source credentials, such as identifying the server types, verifying the authentication types, and more. The schema outlines the various attributes of the source credential for you, including the ARN, server type, authentication type, and token.

Examples

Basic info

Determine the areas in which authentication types and server types are used across different regions. This can provide useful insights for managing and optimizing the use of AWS CodeBuild source credentials.

select
  arn,
  server_type,
  auth_type,
  region
from
  aws_codebuild_source_credential;
select
  arn,
  server_type,
  auth_type,
  region
from
  aws_codebuild_source_credential;

List projects using OAuth to access GitHub source repository

This query helps identify projects that are utilizing OAuth for accessing GitHub as their source repository. This could be useful for auditing purposes, ensuring the correct authorization method is being used for accessing code repositories.

select
  p.arn as project_arn,
  p.source ->> 'Location' as source_repository, 
  p.source ->> 'Type' as source_repository_type,
  c.auth_type as authorization_type
from
  aws_codebuild_project as p
  join aws_codebuild_source_credential as c on (p.region = c.region and p.source ->> 'Type' = c.server_type)
where
  p.source ->> 'Type' = 'GITHUB'
  and c.auth_type = 'OAUTH';
select
  p.arn as project_arn,
  json_extract(p.source, '$.Location') as source_repository, 
  json_extract(p.source, '$.Type') as source_repository_type,
  c.auth_type as authorization_type
from
  aws_codebuild_project as p
  join aws_codebuild_source_credential as c on (p.region = c.region and json_extract(p.source, '$.Type') = c.server_type)
where
  json_extract(p.source, '$.Type') = 'GITHUB'
  and c.auth_type = 'OAUTH';
title description
Steampipe Table: aws_codecommit_repository - Query AWS CodeCommit Repositories using SQL
Allows users to query AWS CodeCommit repositories and retrieve data such as repository name, ARN, description, clone URL, last modified date, and other related details.

Table: aws_codecommit_repository - Query AWS CodeCommit Repositories using SQL

The AWS CodeCommit Repository is a fully-managed source control service that hosts secure Git-based repositories. It makes it easy for teams to collaborate on code in a secure and highly scalable ecosystem. CodeCommit eliminates the need to operate your own source control system or worry about scaling its infrastructure.

Table Usage Guide

The aws_codecommit_repository table in Steampipe provides you with information about repositories within AWS CodeCommit. This table allows you, as a DevOps engineer, to query repository-specific details, including repository name, ARN, description, clone URL, last modified date, and other related details. You can utilize this table to gather insights on repositories, such as repositories with specific ARNs, the last modified date of repositories, verification of clone URLs, and more. The schema outlines the various attributes of the CodeCommit repository for you, including the repository name, ARN, clone URL, and associated metadata.

Examples

Basic info

This query allows you to explore the details of your AWS CodeCommit repositories, including their names, IDs, creation dates, and regions. It's useful for gaining insights into your repository usage and organization across different regions.

select
  repository_name,
  repository_id,
  arn,
  creation_date,
  region
from
  aws_codecommit_repository;
select
  repository_name,
  repository_id,
  arn,
  creation_date,
  region
from
  aws_codecommit_repository;
title description
Steampipe Table: aws_codedeploy_app - Query AWS CodeDeploy Applications using SQL
Allows users to query AWS CodeDeploy Applications to return detailed information about each application, including application name, ID, and associated deployment groups.

Table: aws_codedeploy_app - Query AWS CodeDeploy Applications using SQL

The AWS CodeDeploy service automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. An Application in AWS CodeDeploy is a name that uniquely identifies the application you want to deploy. AWS CodeDeploy uses this name, which functions like a container, to ensure the correct combination of revision, deployment configuration, and deployment group are referenced during a deployment.

Table Usage Guide

The aws_codedeploy_app table in Steampipe provides you with information about applications within AWS CodeDeploy. This table allows you, as a DevOps engineer, to query application-specific details, including application name, compute platform, and linked deployment groups. You can utilize this table to gather insights on applications, such as their deployment configurations, linked deployment groups, and compute platforms. The schema outlines the various attributes of the CodeDeploy application for you, including the application name, application ID, and the linked deployment groups.

Examples

Basic info

Explore the deployment applications in your AWS environment to understand their creation time and associated computing platform. This is beneficial for tracking the history and configuration of your applications across different regions.

select
  arn,
  application_id,
  application_name,
  compute_platform,
  create_time,
  region
from
  aws_codedeploy_app;
select
  arn,
  application_id,
  application_name,
  compute_platform,
  create_time,
  region
from
  aws_codedeploy_app;

Get total applications deployed on each platform

Explore the distribution of applications across various platforms to better understand your deployment strategy. This can assist in identifying platforms that are heavily utilized for deploying applications, aiding in resource allocation and management decisions.

select
  count(arn) as application_count,
  compute_platform
from
  aws_codedeploy_app
group by
  compute_platform;
select
  count(arn) as application_count,
  compute_platform
from
  aws_codedeploy_app
group by
  compute_platform;

List applications linked to GitHub

Identify instances where applications are linked to GitHub within the AWS CodeDeploy service. This is useful for gaining insights into the integration between your applications and GitHub, which can help in managing and troubleshooting your deployment processes.

select
  arn,
  application_id,
  compute_platform,
  create_time,
  github_account_name
from
  aws_codedeploy_app
where
  linked_to_github;
select
  arn,
  application_id,
  compute_platform,
  create_time,
  github_account_name
from
  aws_codedeploy_app
where
  linked_to_github = 1;
title description
Steampipe Table: aws_codedeploy_deployment_config - Query AWS CodeDeploy Deployment Configurations using SQL
Allows users to query AWS CodeDeploy Deployment Configurations to retrieve information about the deployment configurations within AWS CodeDeploy service.

Table: aws_codedeploy_deployment_config - Query AWS CodeDeploy Deployment Configurations using SQL

The AWS CodeDeploy Deployment Configurations is a feature of AWS CodeDeploy, a service that automates code deployments to any instance, including Amazon EC2 instances and servers hosted on-premise. Deployment configurations specify deployment rules and success/failure conditions used by AWS CodeDeploy when pushing out new application versions. This enables you to have a consistent, repeatable process for releasing new software, eliminating the complexity of updating applications and systems.

Table Usage Guide

The aws_codedeploy_deployment_config table in Steampipe provides you with information about deployment configurations within AWS CodeDeploy. This table allows you as a DevOps engineer, developer, or system administrator to query deployment configuration details, including deployment configuration names, minimum healthy hosts, and compute platform. You can utilize this table to gather insights on configurations, such as those with specific compute platforms, minimum healthy host requirements, and more. The schema outlines the various attributes of the deployment configuration for you, including the deployment configuration ID, deployment configuration name, and the compute platform.

Examples

Basic info

Explore various configurations of your AWS CodeDeploy deployments to understand their compute platforms, creation times, and regions. This can help you manage and optimize your deployments effectively.

select
  arn,
  deployment_config_id,
  deployment_config_name,
  compute_platform,
  create_time,
  region
from
  aws_codedeploy_deployment_config;
select
  arn,
  deployment_config_id,
  deployment_config_name,
  compute_platform,
  create_time,
  region
from
  aws_codedeploy_deployment_config;

Get the configuration count for each compute platform

This query helps you understand the distribution of configurations across different compute platforms in your AWS CodeDeploy service. It's useful for gaining insights into how your deployment configurations are spread across different platforms, aiding in resource allocation and strategic planning.

select
  count(arn) as configuration_count,
  compute_platform
from
  aws_codedeploy_deployment_config
group by
  compute_platform;
select
  count(arn) as configuration_count,
  compute_platform
from
  aws_codedeploy_deployment_config
group by
  compute_platform;

List the user managed deployment configurations

Determine the areas in which user-managed deployment configurations have been set up. This is useful to understand where and when specific computing platforms were established, providing insights into the regional distribution and timeline of your deployment configurations.

select
  arn,
  deployment_config_id,
  deployment_config_name,
  compute_platform,
  create_time,
  region
from
  aws_codedeploy_deployment_config
where
  create_time is not null;
select
  arn,
  deployment_config_id,
  deployment_config_name,
  compute_platform,
  create_time,
  region
from
  aws_codedeploy_deployment_config
where
  create_time is not null;

List the minimum healthy hosts required by each deployment configuration

Discover the segments that require the least number of healthy hosts for each deployment configuration. This can be useful in optimizing resource allocation and ensuring efficient application deployment.

select
  arn,
  deployment_config_id,
  deployment_config_name,
  compute_platform,
  minimum_healthy_hosts ->> 'Type' as host_type,
  minimum_healthy_hosts ->> 'Value' as host_value,
  region
from
  aws_codedeploy_deployment_config
where
  create_time is not null;
select
  arn,
  deployment_config_id,
  deployment_config_name,
  compute_platform,
  json_extract(minimum_healthy_hosts, '$.Type') as host_type,
  json_extract(minimum_healthy_hosts, '$.Value') as host_value,
  region
from
  aws_codedeploy_deployment_config
where
  create_time is not null;

Get traffic routing details for TimeBasedCanary deployment configurations

Determine the areas in which your AWS CodeDeploy configurations are utilizing TimeBasedCanary deployments. This can be useful for understanding how traffic is managed during deployments, and to assess the percentage and intervals of traffic being directed to your new service versions.

select
  arn,
  deployment_config_id,
  deployment_config_name,
  traffic_routing_config -> 'TimeBasedCanary' ->> 'CanaryInterval' as canary_interval,
  traffic_routing_config -> 'TimeBasedCanary' ->> 'CanaryPercentage' as canary_percentage
from
  aws_codedeploy_deployment_config
where
  traffic_routing_config ->> 'Type' = 'TimeBasedCanary';
select
  arn,
  deployment_config_id,
  deployment_config_name,
  json_extract(traffic_routing_config, '$.TimeBasedCanary.CanaryInterval') as canary_interval,
  json_extract(traffic_routing_config, '$.TimeBasedCanary.CanaryPercentage') as canary_percentage
from
  aws_codedeploy_deployment_config
where
  json_extract(traffic_routing_config, '$.Type') = 'TimeBasedCanary';

Get traffic routing details for TimeBasedLinear deployment configurations

Explore the intricacies of traffic routing for deployments using a 'TimeBasedLinear' configuration. This allows you to understand the rate of change over time, helping to optimize deployment strategies.

select
  arn,
  deployment_config_id,
  deployment_config_name,
  traffic_routing_config -> 'TimeBasedLinear' ->> 'LinearInterval' as linear_interval,
  traffic_routing_config -> 'TimeBasedLinear' ->> 'LinearPercentage' as linear_percentage
from
  aws_codedeploy_deployment_config
where
  traffic_routing_config ->> 'Type' = 'TimeBasedLinear';
select
  arn,
  deployment_config_id,
  deployment_config_name,
  json_extract(traffic_routing_config, '$.TimeBasedLinear.LinearInterval') as linear_interval,
  json_extract(traffic_routing_config, '$.TimeBasedLinear.LinearPercentage') as linear_percentage
from
  aws_codedeploy_deployment_config
where
  json_extract(traffic_routing_config, '$.Type') = 'TimeBasedLinear';
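Both routing strategies can also be surveyed at once by selecting only the Type key used in the filters above, a simple sketch built from the same column; in SQLite, replace the operator with json_extract(traffic_routing_config, '$.Type').

```sql
select
  deployment_config_name,
  traffic_routing_config ->> 'Type' as routing_type
from
  aws_codedeploy_deployment_config
where
  traffic_routing_config is not null;
```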
title description
Steampipe Table: aws_codedeploy_deployment_group - Query AWS CodeDeploy Deployment Groups using SQL
Allows users to query AWS CodeDeploy Deployment Group details including deployment configurations, target revisions, and associated alarm configurations.

Table: aws_codedeploy_deployment_group - Query AWS CodeDeploy Deployment Groups using SQL

The AWS CodeDeploy Deployment Group is a set of individual instances, CodeDeploy Lambda deployment configuration settings, or an EC2 tag set. It is used to represent a deployment's target, be it an instance, a Lambda function, or an EC2 instance. The Deployment Group is a key component of the AWS CodeDeploy service, which automates code deployments to any instance, including Amazon EC2 instances and servers running on-premise.

Table Usage Guide

The aws_codedeploy_deployment_group table in Steampipe provides you with information about deployment groups within AWS CodeDeploy. This table allows you as a DevOps engineer to query deployment group-specific details, including deployment configurations, target revisions, and associated alarm configurations. You can utilize this table to gather insights on deployment groups, such as deployment configuration names, target revisions, and alarm configurations. The schema outlines the various attributes of the deployment group for you, including the deployment group name, service role ARN, deployment configuration name, target revision, and associated alarm configurations.

Examples

Basic info

Explore which deployment groups are active in your AWS CodeDeploy application, including their deployment style and region. This can help identify any inconsistencies or areas for optimization in deployment strategies.

select
  arn,
  deployment_group_id,
  deployment_group_name,
  application_name,
  deployment_style,
  region
from
  aws_codedeploy_deployment_group;
select
  arn,
  deployment_group_id,
  deployment_group_name,
  application_name,
  deployment_style,
  region
from
  aws_codedeploy_deployment_group;

Get total deployment groups on each platform

Determine the total number of deployment groups across each computing platform. This can provide insights into the distribution of resources and help in effective resource management.

select
  count(arn) as group_count,
  compute_platform
from
  aws_codedeploy_deployment_group
group by
  compute_platform;
select
  count(arn) as group_count,
  compute_platform
from
  aws_codedeploy_deployment_group
group by
  compute_platform;

List the last successful deployment for each deployment group

Determine the status of your most recent successful deployments across different deployment groups. This can help you track your deployment history and identify any potential issues or bottlenecks in your deployment process.

select
  arn,
  deployment_group_id,
  last_successful_deployment
from
  aws_codedeploy_deployment_group;
select
  arn,
  deployment_group_id,
  last_successful_deployment
from
  aws_codedeploy_deployment_group;
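The last_successful_deployment column is a JSON object, so individual fields can be pulled out with the JSON operators used elsewhere in this document. Note the key names below (DeploymentId, Status) are assumptions based on the AWS CodeDeploy API shape and may need adjusting; the SQLite variant uses json_extract with the same paths.

```sql
select
  arn,
  deployment_group_id,
  -- key names are assumed from the CodeDeploy API; verify against your results
  last_successful_deployment ->> 'DeploymentId' as deployment_id,
  last_successful_deployment ->> 'Status' as status
from
  aws_codedeploy_deployment_group
where
  last_successful_deployment is not null;
```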

Get total deployment groups based on deployment style

Analyze your deployment styles to understand the distribution of your deployment groups. This can help optimize resource allocation and improve deployment efficiency.

select
  count(arn) as group_count,
  deployment_style
from
  aws_codedeploy_deployment_group
group by
  deployment_style;
select
  count(arn) as group_count,
  deployment_style
from
  aws_codedeploy_deployment_group
group by
  deployment_style;

List the deployment groups having automatic rollback enabled

Determine the areas in which automatic rollback is enabled for deployment groups. This is useful to quickly identify configurations that can help prevent unintended changes or disruptions to services.

select
  arn,
  deployment_group_id,
  deployment_group_name,
  auto_rollback_configuration ->> 'Enabled' as auto_rollback_configuration_enabled
from
  aws_codedeploy_deployment_group
where
  auto_rollback_configuration ->> 'Enabled' = 'true';
select
  arn,
  deployment_group_id,
  deployment_group_name,
  json_extract(auto_rollback_configuration, '$.Enabled') as auto_rollback_configuration_enabled
from
  aws_codedeploy_deployment_group
where
  json_extract(auto_rollback_configuration, '$.Enabled') = 'true';

List all autoscaling groups in a particular deployment group for an application

Analyze the settings to understand the configuration of autoscaling groups within a specific deployment group for a particular application. This can be useful in managing and optimizing resource usage in your cloud environment.

select
  arn as group_arn,
  deployment_group_id,
  deployment_group_name,
  asg ->> 'Hook' as auto_scaling_group_hook,
  asg ->> 'Name' as auto_scaling_group_name
from
  aws_codedeploy_deployment_group,
  jsonb_array_elements(auto_scaling_groups) as asg
where
  application_name = 'abc'
  and deployment_group_name = 'def';
select
  arn as group_arn,
  deployment_group_id,
  deployment_group_name,
  json_extract(asg.value, '$.Hook') as auto_scaling_group_hook,
  json_extract(asg.value, '$.Name') as auto_scaling_group_name
from
  aws_codedeploy_deployment_group,
  json_each(auto_scaling_groups) as asg
where
  application_name = 'abc'
  and deployment_group_name = 'def';

List the deployment groups having alarm configuration enabled

Determine the areas in which alarm monitoring is enabled for deployment groups. This is useful to identify groups whose deployments can be stopped automatically when an associated CloudWatch alarm goes off.

select
  arn,
  deployment_group_id,
  deployment_group_name,
  alarm_configuration ->> 'Enabled' as alarm_configuration_enabled
from
  aws_codedeploy_deployment_group
where
  alarm_configuration ->> 'Enabled' = 'true';
select
  arn,
  deployment_group_id,
  deployment_group_name,
  json_extract(alarm_configuration, '$.Enabled') as alarm_configuration_enabled
from
  aws_codedeploy_deployment_group
where
  json_extract(alarm_configuration, '$.Enabled') = 'true';
title description
Steampipe Table: aws_codepipeline_pipeline - Query AWS CodePipeline Pipeline using SQL
Allows users to query AWS CodePipeline Pipeline data, including pipeline names, statuses, stages, and associated metadata.

Table: aws_codepipeline_pipeline - Query AWS CodePipeline Pipeline using SQL

The AWS CodePipeline is a fully managed continuous delivery service that helps you automate your release pipelines for fast and reliable application and infrastructure updates. CodePipeline automates the build, test, and deploy phases of your release process every time there is a code change, based on the release model you define. This enables you to rapidly and reliably deliver features and updates.

Table Usage Guide

The aws_codepipeline_pipeline table in Steampipe provides you with information about pipelines within AWS CodePipeline. This table allows you, as a DevOps engineer, to query pipeline-specific details, including pipeline names, statuses, stages, and associated metadata. You can utilize this table to gather insights on pipelines, such as pipeline execution history, pipeline settings, and more. The schema outlines the various attributes of the pipeline for you, including the pipeline ARN, creation date, stages, and associated tags.

Examples

Basic info

Discover the segments that are part of the AWS CodePipeline service. This information can be useful for auditing, tracking resource usage, and understanding your overall AWS environment.

select
  name,
  arn,
  tags_src,
  region,
  account_id
from
  aws_codepipeline_pipeline;
select
  name,
  arn,
  tags_src,
  region,
  account_id
from
  aws_codepipeline_pipeline;

List unencrypted pipelines

Discover the segments that have unencrypted pipelines in the AWS CodePipeline service to enhance your security measures. This helps in identifying potential security risks and taking necessary actions to protect your data.

select
  name,
  arn,
  encryption_key
from
  aws_codepipeline_pipeline
where
  encryption_key is null;
select
  name,
  arn,
  encryption_key
from
  aws_codepipeline_pipeline
where
  encryption_key is null;
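Using only the columns already shown in the basic info example, pipelines can also be counted per region to see how they are distributed; the syntax is identical in PostgreSQL and SQLite.

```sql
select
  region,
  count(arn) as pipeline_count
from
  aws_codepipeline_pipeline
group by
  region;
```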
title description
Steampipe Table: aws_codestar_notification_rule - Query AWS CodeStar notification rules using SQL
Allows users to query CodeStar notification rules in the AWS Developer Tools to retrieve information about notification rules.

Table: aws_codestar_notification_rule - Query AWS CodeStar notification rules using SQL

The AWS CodeStar notification rules allow you to set up notifications for the AWS Developer Tools, including AWS CodePipeline and AWS CodeBuild, to various destinations including AWS SNS and AWS Chatbot.

Table Usage Guide

The aws_codestar_notification_rule table in Steampipe provides you with information about notification rules. This table allows you, as a DevOps engineer, to query notification rule details, including the notification rule ARN, status, level of detail, enabled event types, as well as the ARN of the resource producing notifications and the notification targets. You can use this table to gather insights on notification rules, and combine it with other tables such as aws_codepipeline_pipeline to check notification rules are set up consistently.

Examples

Basic info

Review the configured rules and their status.

select
  name,
  resource,
  detail_type,
  status
from
  aws_codestar_notification_rule;
select
  name,
  resource,
  detail_type,
  status
from
  aws_codestar_notification_rule;

Identify which CI/CD pipelines have notification rules

Determine which AWS CodePipeline pipelines do or do not have associated notification rules.

select
  pipeline.name as pipeline,
  notification_rule.name as notification_rule,
  notification_rule.status
from
  aws_codepipeline_pipeline as pipeline
  left join aws_codestar_notification_rule as notification_rule on pipeline.arn = notification_rule.resource;
select
  pipeline.name as pipeline,
  notification_rule.name as notification_rule,
  notification_rule.status
from
  aws_codepipeline_pipeline as pipeline
  left join aws_codestar_notification_rule as notification_rule on pipeline.arn = notification_rule.resource;

Check for notification rules with no targets

Determine which notification rules lack targets. This query uses PostgreSQL's JSON querying capabilities to count the number of targets configured.

select
  name
from
  aws_codestar_notification_rule
where
  jsonb_array_length(targets) = 0;
select
  name
from
  aws_codestar_notification_rule
where
  json_array_length(targets) = 0;

Name the SNS topics associated with notification rules

Determine which AWS SNS topics the notification rules are targeting. This query uses PostgreSQL's JSON querying capabilities to join on the notification rule targets. Note that due to the cross join, this query will not list notification rules that don't have any targets.

select
  notification_rule.name as notification_rule,
  target ->> 'TargetType' as target_type,
  topic.title as target_topic
from
  aws_codestar_notification_rule as notification_rule
  cross join jsonb_array_elements(notification_rule.targets) as target
  left join aws_sns_topic as topic on target ->> 'TargetAddress' = topic.topic_arn;
select
  notification_rule.name as notification_rule,
  json_extract(target.value, '$.TargetType') as target_type,
  topic.title as target_topic
from
  aws_codestar_notification_rule as notification_rule
  cross join json_each(notification_rule.targets) as target
  left join aws_sns_topic as topic on json_extract(target.value, '$.TargetAddress') = topic.topic_arn;

Using CTE to retain notification rules without targets

By using a Common Table Expression (with query), it is possible to join on targets without discarding notification rules that don't have any targets.

with rule_target as (
  select
    arn,
    target ->> 'TargetAddress' as target_address,
    target ->> 'TargetStatus' as target_status,
    target ->> 'TargetType' as target_type
  from
    aws_codestar_notification_rule
    cross join jsonb_array_elements(targets) as target
)
select
  notification_rule.name as notification_rule,
  rule_target.target_type,
  topic.title as target_topic
from
  aws_codestar_notification_rule as notification_rule
  left join rule_target on rule_target.arn = notification_rule.arn
  left join aws_sns_topic as topic on rule_target.target_address = topic.topic_arn;
with rule_target as (
  select
    notification_rule.arn,
    json_extract(target.value, '$.TargetAddress') as target_address,
    json_extract(target.value, '$.TargetStatus') as target_status,
    json_extract(target.value, '$.TargetType') as target_type
  from
    aws_codestar_notification_rule as notification_rule
    cross join json_each(notification_rule.targets) as target
)
select
  notification_rule.name as notification_rule,
  rule_target.target_type,
  topic.title as target_topic
from
  aws_codestar_notification_rule as notification_rule
  left join rule_target on rule_target.arn = notification_rule.arn
  left join aws_sns_topic as topic on rule_target.target_address = topic.topic_arn;
title description
Steampipe Table: aws_cognito_identity_pool - Query AWS Cognito Identity Pools using SQL
Allows users to query AWS Cognito Identity Pools and retrieve detailed information about each identity pool, including its configuration and associated roles.

Table: aws_cognito_identity_pool - Query AWS Cognito Identity Pools using SQL

The AWS Cognito Identity Pool is a service that provides temporary AWS credentials for users who you authenticate (federated users), or for users who are authenticated by a public login provider. These identity pools define which user attributes and attribute mappings to use when users sign in. It allows you to create unique identities for your users and federate them with identity providers.

Table Usage Guide

The aws_cognito_identity_pool table in Steampipe provides you with information about identity pools within AWS Cognito. This table enables you, as a DevOps engineer, to query identity pool-specific details, including its ID, ARN, configuration, and associated roles. You can utilize this table to gather insights on identity pools, such as their authentication providers, supported logins, and whether unauthenticated logins are allowed. The schema outlines the various attributes of the identity pool for you, including the identity pool ID, ARN, creation date, last modified date, and associated tags.

Examples

Basic info

Explore which AWS Cognito identity pools are associated with your account and gain insights into their regional distribution. This information can help you manage your AWS resources effectively and understand your usage patterns across different regions.

select
  identity_pool_id,
  identity_pool_name,
  tags,
  region,
  account_id
from
  aws_cognito_identity_pool;
select
  identity_pool_id,
  identity_pool_name,
  tags,
  region,
  account_id
from
  aws_cognito_identity_pool;

List identity pools with classic flow enabled

Determine the areas in which classic flow is enabled within identity pools to assess potential security risks.

select
  identity_pool_id,
  identity_pool_name,
  allow_classic_flow
from
  aws_cognito_identity_pool
where
  allow_classic_flow;
select
  identity_pool_id,
  identity_pool_name,
  allow_classic_flow
from
  aws_cognito_identity_pool
where
  allow_classic_flow = 1;

List identity pools that allow unauthenticated identities

Determine the areas in which identity pools allow unauthenticated identities, helping to identify potential security risks.

select
  identity_pool_id,
  identity_pool_name,
  allow_unauthenticated_identities
from
  aws_cognito_identity_pool
where
  allow_unauthenticated_identities;
select
  identity_pool_id,
  identity_pool_name,
  allow_unauthenticated_identities
from
  aws_cognito_identity_pool
where
  allow_unauthenticated_identities = 1;

Get the identity provider details for a particular identity pool

Explore the specifics of a particular identity provider by examining its client and provider names, as well as its server-side token status. This is useful for assessing the configuration of your identity pool and ensuring it aligns with your security and usage requirements.

select
  identity_pool_id,
  identity_pool_name,
  allow_classic_flow,
  cognito_identity_providers ->> 'ClientId' as identity_provider_client_id,
  cognito_identity_providers ->> 'ProviderName' as identity_provider_name,
  cognito_identity_providers ->> 'ServerSideTokenCheck' as server_side_token_enabled
from
  aws_cognito_identity_pool
where
  identity_pool_id = 'eu-west-3:e96205bf-1ef2-4fe6-a748-65e948673960';
select
  identity_pool_id,
  identity_pool_name,
  allow_classic_flow,
  json_extract(cognito_identity_providers, '$.ClientId') as identity_provider_client_id,
  json_extract(cognito_identity_providers, '$.ProviderName') as identity_provider_name,
  json_extract(cognito_identity_providers, '$.ServerSideTokenCheck') as server_side_token_enabled
from
  aws_cognito_identity_pool
where
  identity_pool_id = 'eu-west-3:e96205bf-1ef2-4fe6-a748-65e948673960';
title description
Steampipe Table: aws_cognito_identity_provider - Query AWS Cognito Identity Providers using SQL
Allows users to query AWS Cognito Identity Providers, providing essential details about the identity provider configurations within AWS Cognito User Pools.

Table: aws_cognito_identity_provider - Query AWS Cognito Identity Providers using SQL

The AWS Cognito Identity Provider is a feature of Amazon Cognito, a service that provides authentication, authorization, and user management for your web and mobile apps. It allows you to easily integrate third-party identity providers with your Cognito User Pools, enabling users to sign in using their existing social or enterprise identities. This simplifies the sign-in process for your users and can help increase engagement.

Table Usage Guide

The aws_cognito_identity_provider table in Steampipe provides you with information about the identity provider configurations within AWS Cognito User Pools. This table allows you, as a DevOps engineer, security analyst, or developer, to query provider-specific details, including the provider name, type, attributes mapping, and associated metadata. You can utilize this table to gather insights on identity providers, such as understanding the identity providers linked to user pools, verifying attribute mappings, and more. The schema outlines the various attributes of the identity provider for you, including the provider name, creation date, user pool id, and attribute mapping.

Examples

Basic info

Explore which identity providers are associated with a specific user pool in a certain region and account of AWS Cognito service. This can be useful to understand the configuration of identity providers for managing user authentication and access control.

select
  provider_name,
  user_pool_id,
  region,
  account_id
from
  aws_cognito_identity_provider
where
  user_pool_id = 'us-east-1_012345678';
select
  provider_name,
  user_pool_id,
  region,
  account_id
from
  aws_cognito_identity_provider
where
  user_pool_id = 'us-east-1_012345678';

Show details of Google identity providers of a user pool

Discover the segments that pertain to Google as an identity provider within a specified user pool. This can help in understanding the association between the user pool and Google, aiding in user management and access control.

select
  provider_name,
  user_pool_id,
  provider_details
from
  aws_cognito_identity_provider
where
  provider_type = 'Google'
  and user_pool_id = 'us-east-1_012345678';
select
  provider_name,
  user_pool_id,
  provider_details
from
  aws_cognito_identity_provider
where
  provider_type = 'Google'
  and user_pool_id = 'us-east-1_012345678';
title description
Steampipe Table: aws_cognito_user_pool - Query AWS Cognito User Pools using SQL
Allows users to query AWS Cognito User Pools to fetch detailed information about each user pool, including the pool's configuration, status, and associated metadata.

Table: aws_cognito_user_pool - Query AWS Cognito User Pools using SQL

The AWS Cognito User Pool is a user directory in Amazon Cognito. With a user pool, you can manage user directories, and let users sign in through Amazon Cognito or federate them through a social identity provider. This service also provides features for security, compliance, and user engagement.

Table Usage Guide

The aws_cognito_user_pool table in Steampipe provides you with information about User Pools within AWS Cognito. This table allows you, as a DevOps engineer, to query user pool-specific details, including the pool's configuration, status, and associated metadata. You can utilize this table to gather insights on user pools, such as the pool's creation and last modified dates, password policies, MFA and SMS configuration, and more. The schema outlines the various attributes of the user pool for you, including the pool ID, ARN, name, status, and associated tags.

Examples

Basic info

Explore which user pools are set up in your AWS Cognito service, allowing you to understand the distribution across different regions and accounts. This can be useful for managing access and assessing the overall configuration of your user authentication system.

select
  id,
  name,
  arn,
  tags,
  region,
  account_id
from
  aws_cognito_user_pool;
select
  id,
  name,
  arn,
  tags,
  region,
  account_id
from
  aws_cognito_user_pool;

List user pools with MFA enabled

Determine the areas in which multi-factor authentication is enabled for user pools, aiding in the assessment of security measures within your AWS Cognito service.

select
  name,
  arn,
  mfa_configuration
from
  aws_cognito_user_pool
where
  mfa_configuration != 'OFF';
select
  name,
  arn,
  mfa_configuration
from
  aws_cognito_user_pool
where
  mfa_configuration != 'OFF';
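
To see how MFA enforcement is distributed across all pools, the same column can be aggregated. This is a minimal sketch that works in both PostgreSQL and SQLite, assuming the standard mfa_configuration values ('OFF', 'ON', 'OPTIONAL') reported by Amazon Cognito:

```sql
-- Count user pools per MFA setting
select
  mfa_configuration,
  count(*) as pool_count
from
  aws_cognito_user_pool
group by
  mfa_configuration;
```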
title description
Steampipe Table: aws_config_aggregate_authorization - Query AWS Config Aggregate Authorizations using SQL
Allows users to query AWS Config Aggregate Authorizations, providing vital information about AWS Config rules and their respective authorizations in an aggregated form.

Table: aws_config_aggregate_authorization - Query AWS Config Aggregate Authorizations using SQL

The AWS Config Aggregate Authorization is a feature of AWS Config that allows you to authorize the aggregator account to collect AWS Config data from source accounts. It simplifies compliance auditing by enabling you to collect configuration and compliance data across multiple accounts and regions, and aggregate it into a central account. This centralized data can then be accessed using SQL queries for analysis and reporting.

Table Usage Guide

The aws_config_aggregate_authorization table in Steampipe provides you with information about AWS Config Aggregate Authorizations. This table allows you, as a DevOps engineer, to query authorization-specific details, including the account ID and region that are allowed to aggregate AWS Config rules. You can utilize this table to gather insights on AWS Config Aggregate Authorizations, such as the permissions and trust policies associated with each authorization, the AWS account that has been granted the authorization, and more. The schema outlines the various attributes of the AWS Config Aggregate Authorization for you, including the account ID, region, and associated ARN.

Examples

Basic info

Discover the segments that are authorized to access your AWS configuration data, including the region and account details. This can help you manage access control and understand when these authorizations were created.

select
  arn,
  authorized_account_id,
  authorized_aws_region,
  creation_time
from
  aws_config_aggregate_authorization;
select
  arn,
  authorized_account_id,
  authorized_aws_region,
  creation_time
from
  aws_config_aggregate_authorization;
title description
Steampipe Table: aws_config_configuration_recorder - Query AWS Config Configuration Recorder using SQL
Allows users to query AWS Config Configuration Recorder

Table: aws_config_configuration_recorder - Query AWS Config Configuration Recorder using SQL

The AWS Config Configuration Recorder is a feature that enables you to record the resource configurations in your AWS account. It captures and tracks changes to the configuration of your AWS resources, allowing you to assess, audit, and evaluate the configurations of your AWS resources. This helps ensure that your resource configurations are in compliance with your organization's policies and best practices.

Table Usage Guide

The aws_config_configuration_recorder table in Steampipe provides you with information about Configuration Recorders within AWS Config. This table allows you, as a DevOps engineer, security analyst, or cloud administrator, to query configuration recorder-specific details, including its current status, associated role ARN, and whether it is recording all resource types. You can utilize this table to gather insights on configuration recorders, such as which resources are being recorded, the recording status, and more. The schema outlines the various attributes of the Configuration Recorder for you, including the name, role ARN, resource types, and recording group.

Examples

Basic info

Explore which AWS configuration recorders are active and recording, to better understand and manage your AWS resources and their configurations. This can be particularly useful for auditing, compliance, and operational troubleshooting purposes.

select
  name,
  role_arn,
  status,
  recording_group,
  status_recording,
  akas,
  title
from
  aws_config_configuration_recorder;
select
  name,
  role_arn,
  status,
  recording_group,
  status_recording,
  akas,
  title
from
  aws_config_configuration_recorder;

List configuration recorders that are not recording

Discover segments of configuration recorders that are currently inactive. This is beneficial in identifying potential gaps in your AWS Config setup, ensuring all necessary configuration changes are being tracked.

select
  name,
  role_arn,
  status_recording,
  title
from
  aws_config_configuration_recorder
where
  not status_recording;
select
  name,
  role_arn,
  status_recording,
  title
from
  aws_config_configuration_recorder
where
  status_recording != 1;

List configuration recorders with failed deliveries

Discover the segments that have experienced delivery failures in AWS Configuration Recorder. This is beneficial for identifying and resolving issues in the system to ensure smooth operations.

select
  name,
  status ->> 'LastStatus' as last_status,
  status ->> 'LastStatusChangeTime' as last_status_change_time,
  status ->> 'LastErrorCode' as last_error_code,
  status ->> 'LastErrorMessage' as last_error_message
from
  aws_config_configuration_recorder
where
  status ->> 'LastStatus' = 'FAILURE';
select
  name,
  json_extract(status, '$.LastStatus') as last_status,
  json_extract(status, '$.LastStatusChangeTime') as last_status_change_time,
  json_extract(status, '$.LastErrorCode') as last_error_code,
  json_extract(status, '$.LastErrorMessage') as last_error_message
from
  aws_config_configuration_recorder
where
  json_extract(status, '$.LastStatus') = 'FAILURE';
title description
Steampipe Table: aws_config_conformance_pack - Query AWS Config Conformance Packs using SQL
Allows users to query AWS Config Conformance Packs to fetch information about the AWS Config conformance packs deployed on an AWS account.

Table: aws_config_conformance_pack - Query AWS Config Conformance Packs using SQL

The AWS Config Conformance Pack is a collection of AWS Config rules and remediation actions that can be easily deployed as a single entity in an account and a region. These packs can be used to create a common baseline of security, compliance, or operational best practices across multiple accounts in your organization. AWS Config continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations.

Table Usage Guide

The aws_config_conformance_pack table in Steampipe provides you with information about AWS Config conformance packs within the AWS Config service. This table allows you, as a DevOps engineer, to query conformance pack-specific details, including pack names, delivery S3 bucket, and associated metadata. You can utilize this table to gather insights on conformance packs, such as pack ARN, creation time, last update requested time, input parameters, and more. The schema outlines the various attributes of the conformance pack for you, including the pack ARN, delivery S3 bucket, input parameters, and associated tags.

Examples

Basic info

Explore general information about AWS Config Conformance Packs, such as who created them and when they were last updated. This can help you understand the management and status of these resources in your AWS environment.

select
  name,
  conformance_pack_id,
  created_by,
  last_update_requested_time,
  title,
  akas
from
  aws_config_conformance_pack;
select
  name,
  conformance_pack_id,
  created_by,
  last_update_requested_time,
  title,
  akas
from
  aws_config_conformance_pack;

Get S3 bucket info for each conformance pack

Explore which conformance packs are associated with each S3 bucket. This can help streamline and improve the management of AWS configurations.

select
  name,
  conformance_pack_id,
  delivery_s3_bucket,
  delivery_s3_key_prefix
from
  aws_config_conformance_pack;
select
  name,
  conformance_pack_id,
  delivery_s3_bucket,
  delivery_s3_key_prefix
from
  aws_config_conformance_pack;

Get input parameter details of each conformance pack

Determine the settings of each conformance pack in your AWS Config service. This helps in understanding how each pack is configured and can assist in identifying any discrepancies or areas for optimization.

select
  name,
  inp ->> 'ParameterName' as parameter_name,
  inp ->> 'ParameterValue' as parameter_value,
  title,
  akas
from
  aws_config_conformance_pack,
  jsonb_array_elements(input_parameters) as inp;
select
  aws_config_conformance_pack.name,
  json_extract(inp.value, '$.ParameterName') as parameter_name,
  json_extract(inp.value, '$.ParameterValue') as parameter_value,
  title,
  akas
from
  aws_config_conformance_pack,
  json_each(input_parameters) as inp;
title description
Steampipe Table: aws_config_retention_configuration - Query AWS Config Retention Configuration using SQL
Allows users to query AWS Config Retention Configuration for information about the retention period that AWS Config uses to retain your configuration items.

Table: aws_config_retention_configuration - Query AWS Config Retention Configuration using SQL

The AWS Config Retention Configuration is a feature within the AWS Config service that allows you to specify the retention period (in days) for your configuration items. This helps in managing the volume of historical configuration items and reducing storage costs. AWS Config automatically deletes configuration items older than the specified retention period.

Table Usage Guide

The aws_config_retention_configuration table in Steampipe provides you with information about the retention period that AWS Config uses to retain your configuration items. This table allows you, as a DevOps engineer, to query retention period details, including the number of days AWS Config retains the configuration items and whether the retention is permanent. You can utilize this table to gather insights on the retention configurations, such as the duration of retention and whether the retention is set to be permanent. The schema outlines the various attributes of the retention configuration for you, including the name of the retention period and the retention period in days.

Examples

Basic info

Explore which AWS Config retention configurations are active and determine the areas in which they are applied. This can help assess the elements within your AWS environment that have specific retention periods for configuration items, facilitating efficient resource management and compliance monitoring.

select
  name,
  retention_period_in_days,
  title,
  region
from
  aws_config_retention_configuration;
select
  name,
  retention_period_in_days,
  title,
  region
from
  aws_config_retention_configuration;

List retention configuration with the retention period less than 1 year

Discover the segments that have a retention period of less than a year in the AWS configuration. This can be useful to identify and review any potentially risky settings where data might not be retained long enough for compliance or auditing purposes.

select
  name,
  retention_period_in_days,
  title
from
  aws_config_retention_configuration
where
  retention_period_in_days < 365;
select
  name,
  retention_period_in_days,
  title
from
  aws_config_retention_configuration
where
  retention_period_in_days < 365;

List retention configuration by region

Discover the segments that have specific retention configurations in a particular region. This can help in understanding how long configuration data is retained and can aid in better compliance management.

select
  name,
  retention_period_in_days,
  title,
  region
from
  aws_config_retention_configuration
where
  region = 'us-east-1';
select
  name,
  retention_period_in_days,
  title,
  region
from
  aws_config_retention_configuration
where
  region = 'us-east-1';

List retention configuration settings of config recorders

Determine the areas in which retention settings of configuration recorders are applied, allowing you to understand how long your AWS Config data is retained in different regions.

select
  c.title as configuration_recorder,
  r.name as retention_configuration_name,
  r.retention_period_in_days,
  r.region
from
  aws_config_retention_configuration as r
  left join aws_config_configuration_recorder as c on r.region = c.region;
select
  c.title as configuration_recorder,
  r.name as retention_configuration_name,
  r.retention_period_in_days,
  r.region
from
  aws_config_retention_configuration as r
  left join aws_config_configuration_recorder as c on r.region = c.region;
title description
Steampipe Table: aws_config_rule - Query AWS Config Rules using SQL
Allows users to query Config Rules in AWS Config service. It provides information about each Config Rule, including its name, ARN, description, scope, and compliance status.

Table: aws_config_rule - Query AWS Config Rules using SQL

AWS Config Rules is a service that enables you to automate the evaluation of recorded configurations against the desired configurations. With Config Rules, you can review changes to configurations and relationships between AWS resources, dive into detailed resource configuration histories, and determine your overall compliance against the configurations specified in your internal guidelines. This enables you to simplify compliance auditing, security analysis, change management, and operational troubleshooting.

Table Usage Guide

The aws_config_rule table in Steampipe provides you with information about Config Rules within the AWS Config service. This table allows you, as a DevOps engineer, to query rule-specific details, including the rule name, ARN, description, scope, and compliance status. You can utilize this table to gather insights on Config Rules, such as rules that are non-compliant, rules applied to specific resources, and more. The schema outlines the various attributes of the Config Rule for you, including the rule ARN, creation date, input parameters, and associated tags.

Examples

Basic info

Explore which AWS configuration rules are in place to gain insights into the current security and compliance state of your AWS resources. This can help identify potential areas of risk or non-compliance.

select
  name,
  rule_id,
  arn,
  rule_state,
  created_by,
  scope
from
  aws_config_rule;
select
  name,
  rule_id,
  arn,
  rule_state,
  created_by,
  scope
from
  aws_config_rule;

List inactive rules

Discover the segments that consist of inactive rules within your AWS configuration to help identify potential areas for optimization or deletion. This could be useful in maintaining a clean and efficient system by removing or updating unused elements.

select
  name,
  rule_id,
  arn,
  rule_state
from
  aws_config_rule
where
  rule_state <> 'ACTIVE';
select
  name,
  rule_id,
  arn,
  rule_state
from
  aws_config_rule
where
  rule_state != 'ACTIVE';

List active rules for S3 buckets

Discover the segments that contain active rules for your S3 buckets to better manage and monitor your AWS resources. This is particularly useful for ensuring compliance and security within your cloud storage environment.

select
  name,
  rule_id,
  tags
from
  aws_config_rule
where
  name like '%s3-bucket%';
select
  name,
  rule_id,
  tags
from
  aws_config_rule
where
  name like '%s3-bucket%';

List compliance details by config rule

Determine the compliance status of a specific AWS Config rule. This is useful to ensure that your AWS resources are following the set rules for approved Amazon Machine Images (AMIs), thereby maintaining a secure and compliant environment.

select
  jsonb_pretty(compliance_by_config_rule) as compliance_info
from
  aws_config_rule
where
  name = 'approved-amis-by-id';
select
  compliance_by_config_rule
from
  aws_config_rule
where
  name = 'approved-amis-by-id';

List compliance types by config rule

Determine the areas in which your AWS configuration rules are compliant or non-compliant. This can help you identify potential issues and ensure your configurations align with best practices.

select
  name as config_rule_name,
  compliance_status -> 'Compliance' -> 'ComplianceType' as compliance_type
from
  aws_config_rule,
  jsonb_array_elements(compliance_by_config_rule) as compliance_status;
select
  name as config_rule_name,
  json_extract(compliance_status.value, '$.Compliance.ComplianceType') as compliance_type
from
  aws_config_rule,
  json_each(compliance_by_config_rule) as compliance_status;

List config rules that run in proactive mode

Identify instances where configuration rules are set to operate in proactive mode, which allows for continuous monitoring and automated compliance checks of your system.

select
  name as config_rule_name,
  c ->> 'Mode' as evaluation_mode
from
  aws_config_rule,
  jsonb_array_elements(evaluation_modes) as c
where
  c ->> 'Mode' = 'PROACTIVE';
select
  name as config_rule_name,
  json_extract(c.value, '$.Mode') as evaluation_mode
from
  aws_config_rule,
  json_each(evaluation_modes) as c
where
  json_extract(c.value, '$.Mode') = 'PROACTIVE';
title description
Steampipe Table: aws_cost_by_account_daily - Query AWS Cost Explorer using SQL
Allows users to query daily AWS costs by account. This table provides an overview of AWS usage and cost data for each AWS account on a daily basis.

Table: aws_cost_by_account_daily - Query AWS Cost Explorer using SQL

The AWS Cost Explorer is a service that allows you to visualize, understand, and manage your AWS costs and usage over time. It provides detailed information about your costs and usage, including both AWS service usage and the costs associated with your usage. You can use Cost Explorer to identify trends, pinpoint cost drivers, and detect anomalies.

Table Usage Guide

The aws_cost_by_account_daily table in Steampipe provides you with information about your daily AWS costs for each of your accounts within AWS Cost Explorer. This table allows you, as a financial analyst, cloud economist, or DevOps engineer, to query daily cost-specific details, including cost usage, unblended costs, and associated metadata. You can utilize this table to gather insights on your daily AWS spending, such as cost trends, cost spikes, and cost predictions. The schema outlines the various attributes of your daily cost, including your linked account, service, currency code, and cost usage details.

Amazon Cost Explorer helps you visualize, understand, and manage your AWS costs and usage. The aws_cost_by_account_daily table provides you with a simplified view of cost for your account (or all linked accounts when run against the organization master), summarized by day, for the last year.

Important Notes

Examples

Basic info

This example allows users to gain insights into their daily AWS cost by account. It's useful for tracking and analyzing cost trends over time, helping to manage and optimize cloud spending.

select
  linked_account_id,
  period_start,
  blended_cost_amount::numeric::money,
  unblended_cost_amount::numeric::money,
  amortized_cost_amount::numeric::money,
  net_unblended_cost_amount::numeric::money,
  net_amortized_cost_amount::numeric::money
from 
  aws_cost_by_account_daily
order by
  linked_account_id,
  period_start;
select
  linked_account_id,
  period_start,
  CAST(blended_cost_amount AS REAL) AS blended_cost_amount,
  CAST(unblended_cost_amount AS REAL) AS unblended_cost_amount,
  CAST(amortized_cost_amount AS REAL) AS amortized_cost_amount,
  CAST(net_unblended_cost_amount AS REAL) AS net_unblended_cost_amount,
  CAST(net_amortized_cost_amount AS REAL) AS net_amortized_cost_amount
from 
  aws_cost_by_account_daily
order by
  linked_account_id,
  period_start;

Min, Max, and average daily unblended_cost_amount by account

Analyze your AWS accounts to understand the minimum, maximum, and average daily costs. This is useful for monitoring the financial performance of different accounts and identifying potential areas for cost optimization.

select
  linked_account_id,
  min(unblended_cost_amount)::numeric::money as min,
  max(unblended_cost_amount)::numeric::money as max,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_account_daily
group by
  linked_account_id
order by
  linked_account_id;
select
  linked_account_id,
  min(unblended_cost_amount) as min,
  max(unblended_cost_amount) as max,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_account_daily
group by
  linked_account_id
order by
  linked_account_id;

Ranked - Top 10 Most expensive days (unblended_cost_amount) by account

Explore the days where the cost was at its highest for each account. This query is useful for identifying potential anomalies or trends in spending, enabling more effective financial management.

with ranked_costs as (
  select
    linked_account_id,
    period_start,
    unblended_cost_amount::numeric::money,
    rank() over(partition by linked_account_id order by unblended_cost_amount desc)
  from 
    aws_cost_by_account_daily
)
select * from ranked_costs where rank <= 10;
Error: SQLite does not support the rank window function.
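
If window functions are unavailable, the top-10 ranking can be approximated with a correlated subquery that counts, per account, how many days were more expensive than the current row. This is a sketch, not a drop-in replacement: with tied cost amounts it can return more than 10 rows per account, mirroring rank() semantics.

```sql
select
  d.linked_account_id,
  d.period_start,
  d.unblended_cost_amount
from
  aws_cost_by_account_daily as d
where
  (
    -- rank - 1: number of strictly more expensive days in the same account
    select count(*)
    from aws_cost_by_account_daily as d2
    where d2.linked_account_id = d.linked_account_id
      and d2.unblended_cost_amount > d.unblended_cost_amount
  ) < 10
order by
  d.linked_account_id,
  d.unblended_cost_amount desc;
```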
title description
Steampipe Table: aws_cost_by_account_monthly - Query AWS Cost Explorer Service using SQL
Allows users to query monthly AWS costs per account. It provides cost details for each AWS account, allowing users to monitor and manage their AWS spending.

Table: aws_cost_by_account_monthly - Query AWS Cost Explorer Service using SQL

The AWS Cost Explorer Service provides insights into your AWS costs and usage. It enables you to visualize, understand, and manage your AWS costs and usage over time. You can use it to query your monthly AWS costs by account using SQL.

Table Usage Guide

The aws_cost_by_account_monthly table in Steampipe provides you with information about your monthly AWS costs per account. This table allows you, as a financial analyst or DevOps engineer, to query cost-specific details, including the total amount spent, the currency code, and the associated AWS account. You can utilize this table to gain insights on your AWS spending and to manage your budget more effectively. The schema outlines the various attributes of your AWS cost, including the account ID, the month, the total amount, and the currency code.

Amazon Cost Explorer helps you visualize, understand, and manage your AWS costs and usage. The aws_cost_by_account_monthly table provides a simplified view of cost for your account (or all linked accounts when run against the organization master), summarized by month, for the last year.

Important Notes

Examples

Basic info

This query allows you to analyze the monthly costs associated with each linked account on AWS. It helps in understanding the financial impact of different accounts and provides insights for better cost management.

select
  linked_account_id,
  period_start,
  blended_cost_amount::numeric::money,
  unblended_cost_amount::numeric::money,
  amortized_cost_amount::numeric::money,
  net_unblended_cost_amount::numeric::money,
  net_amortized_cost_amount::numeric::money
from 
  aws_cost_by_account_monthly
order by
  linked_account_id,
  period_start;
select
  linked_account_id,
  period_start,
  CAST(blended_cost_amount AS REAL) AS blended_cost_amount,
  CAST(unblended_cost_amount AS REAL) AS unblended_cost_amount,
  CAST(amortized_cost_amount AS REAL) AS amortized_cost_amount,
  CAST(net_unblended_cost_amount AS REAL) AS net_unblended_cost_amount,
  CAST(net_amortized_cost_amount AS REAL) AS net_amortized_cost_amount
from 
  aws_cost_by_account_monthly
order by
  linked_account_id,
  period_start;
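
Costs can also be narrowed to recent periods by filtering on period_start. A PostgreSQL sketch; the three-month window is an illustrative choice, and assumes period_start can be used in a where clause against this table:

```sql
-- Monthly unblended cost per linked account, restricted to roughly the
-- last three calendar months (the interval is illustrative).
select
  linked_account_id,
  period_start,
  unblended_cost_amount::numeric::money
from
  aws_cost_by_account_monthly
where
  period_start >= date_trunc('month', current_date - interval '3 months')
order by
  linked_account_id,
  period_start;
```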

Min, Max, and average monthly unblended_cost_amount by account

Analyze your AWS accounts' monthly expenditure to identify the minimum, maximum, and average costs. This information can help in budgeting and managing your cloud expenses more effectively.

select
  linked_account_id,
  min(unblended_cost_amount)::numeric::money as min,
  max(unblended_cost_amount)::numeric::money as max,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_account_monthly
group by
  linked_account_id
order by
  linked_account_id;
select
  linked_account_id,
  min(unblended_cost_amount) as min,
  max(unblended_cost_amount) as max,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_account_monthly
group by
  linked_account_id
order by
  linked_account_id;

Ranked - Most expensive months (unblended_cost_amount) by account

Analyze your spending patterns by identifying the months with the highest costs for each linked AWS account. This can help manage your budget by highlighting periods of increased expenditure.

select
  linked_account_id,
  period_start,
  unblended_cost_amount::numeric::money,
  rank() over(partition by linked_account_id order by unblended_cost_amount desc)
from 
  aws_cost_by_account_monthly;
select
  linked_account_id,
  period_start,
  unblended_cost_amount,
  (
    select count(*) + 1 
    from aws_cost_by_account_monthly as b
    where 
      a.linked_account_id = b.linked_account_id and 
      a.unblended_cost_amount < b.unblended_cost_amount
  ) as rank
from 
  aws_cost_by_account_monthly as a;

Month on month growth (unblended_cost_amount) by account

This query is designed to analyze monthly expenditure trends across different accounts. It helps users identify any significant changes in costs, which can be useful for budgeting and cost management purposes.

with cost_data as (
  select
    linked_account_id,
    period_start,
    unblended_cost_amount as this_month,
    lag(unblended_cost_amount) over(partition by linked_account_id order by period_start) as previous_month
  from 
    aws_cost_by_account_monthly
)
select
    linked_account_id,
    period_start,
    this_month::numeric::money,
    previous_month::numeric::money,
    case 
      when previous_month = 0 and this_month = 0 then 0
      when previous_month = 0 then 999
      else round((100 * ((this_month - previous_month) / previous_month))::numeric, 2) 
    end as percent_change
from
  cost_data
order by
  linked_account_id,
  period_start;
with cost_data as (
  select
    linked_account_id,
    period_start,
    unblended_cost_amount as this_month,
    lag(unblended_cost_amount) over(partition by linked_account_id order by period_start) as previous_month
  from 
    aws_cost_by_account_monthly
)
select
    linked_account_id,
    period_start,
    this_month,
    previous_month,
    case 
      when previous_month = 0 and this_month = 0 then 0
      when previous_month = 0 then 999
      else round(100 * (this_month - previous_month) / previous_month, 2) 
    end as percent_change
from
  cost_data
order by
  linked_account_id,
  period_start;
title description
Steampipe Table: aws_cost_by_record_type_daily - Query AWS Cost and Usage Report using SQL
Allows users to query daily AWS cost data by record type. This table provides information about AWS costs incurred per record type on a daily basis.

Table: aws_cost_by_record_type_daily - Query AWS Cost and Usage Report using SQL

The AWS Cost and Usage Report is a comprehensive resource that provides detailed information about your AWS costs. It allows you to view your AWS usage and costs for each service category used by your accounts and by specific cost allocation tags. By querying this report, you can gain insights into your AWS spending and optimize your resource utilization.

Table Usage Guide

The aws_cost_by_record_type_daily table in Steampipe provides you with information about AWS costs incurred per record type on a daily basis. This table allows you as a financial analyst, DevOps engineer, or other professional to query cost-specific details, including the linked account, service, usage type, and operation. You can utilize this table to gather insights on cost distribution, such as costs associated with different services, usage types, and operations. The schema outlines the various attributes of the cost record, including the record id, record type, billing period start date, and cost.

Amazon Cost Explorer helps you visualize, understand, and manage your AWS costs and usage. The aws_cost_by_record_type_daily table provides a simplified view of cost for your account (or all linked accounts when run against the organization master), broken down by record type (fees, usage, costs, tax refunds, and credits) and summarized by day, for the last year.


Examples

Basic info

Determine the areas in which your AWS account incurs costs on a daily basis. This query helps you understand your spending patterns by breaking down costs into different categories, allowing you to manage your AWS resources more efficiently.

select
  linked_account_id,
  record_type,
  period_start,
  blended_cost_amount::numeric::money,
  unblended_cost_amount::numeric::money,
  amortized_cost_amount::numeric::money,
  net_unblended_cost_amount::numeric::money,
  net_amortized_cost_amount::numeric::money
from 
  aws_cost_by_record_type_daily
order by
  linked_account_id,
  period_start;
select
  linked_account_id,
  record_type,
  period_start,
  CAST(blended_cost_amount AS REAL) AS blended_cost_amount,
  CAST(unblended_cost_amount AS REAL) AS unblended_cost_amount,
  CAST(amortized_cost_amount AS REAL) AS amortized_cost_amount,
  CAST(net_unblended_cost_amount AS REAL) AS net_unblended_cost_amount,
  CAST(net_amortized_cost_amount AS REAL) AS net_amortized_cost_amount
from 
  aws_cost_by_record_type_daily
order by
  linked_account_id,
  period_start;

Min, Max, and average daily unblended_cost_amount by account and record type

Determine the areas in which you have minimum, maximum, and average daily costs associated with different accounts and record types. This can help you identify potential cost-saving opportunities and better manage your resources.

select
  linked_account_id,
  record_type,
  min(unblended_cost_amount)::numeric::money as min,
  max(unblended_cost_amount)::numeric::money as max,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_record_type_daily
group by
  linked_account_id,
  record_type
order by
  linked_account_id;
select
  linked_account_id,
  record_type,
  min(unblended_cost_amount) as min,
  max(unblended_cost_amount) as max,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_record_type_daily
group by
  linked_account_id,
  record_type
order by
  linked_account_id;
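
Because this table is keyed by record type, it can also isolate non-usage entries such as refunds and credits. A PostgreSQL sketch; the 'Refund' and 'Credit' values are assumptions and should be checked against the record_type values returned for your own account:

```sql
-- Daily refund and credit entries, most recent first.
select
  linked_account_id,
  record_type,
  period_start,
  unblended_cost_amount::numeric::money
from
  aws_cost_by_record_type_daily
where
  record_type in ('Refund', 'Credit')
order by
  period_start desc;
```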

Ranked - Top 10 Most expensive days (unblended_cost_amount) by account and record type

Determine the days with the highest expenses, grouped by account and record type. This query can help in cost optimization by identifying the top 10 most expensive days, allowing for better budget management and resource allocation.

with ranked_costs as (
  select
    linked_account_id,
    record_type,
    period_start,
    unblended_cost_amount::numeric::money,
    rank() over(partition by linked_account_id, record_type order by unblended_cost_amount desc)
  from 
    aws_cost_by_record_type_daily
)
select * from ranked_costs where rank <= 10;
select * from (
  select
    linked_account_id,
    record_type,
    period_start,
    unblended_cost_amount,
    (
      select count(*) + 1 
      from aws_cost_by_record_type_daily as b
      where 
        a.linked_account_id = b.linked_account_id and 
        a.record_type = b.record_type and 
        a.unblended_cost_amount < b.unblended_cost_amount
    ) as rank
  from 
    aws_cost_by_record_type_daily as a
)
where rank <= 10;
title description
Steampipe Table: aws_cost_by_record_type_monthly - Query AWS Cost and Usage Report Records using SQL
Allows users to query AWS Cost and Usage Report Records on a monthly basis.

Table: aws_cost_by_record_type_monthly - Query AWS Cost and Usage Report Records using SQL

The AWS Cost and Usage Report service provides comprehensive cost and usage data about your AWS resources, enabling you to manage your costs and optimize your AWS spend. It records the AWS usage data for your accounts and delivers the log files to a specified Amazon S3 bucket. You can query these records using SQL to gain insights into your resource usage and cost.

Table Usage Guide

The aws_cost_by_record_type_monthly table in Steampipe provides you with information about AWS Cost and Usage Report Records, specifically detailing costs incurred by different record types on a monthly basis. This table allows you, whether you're a DevOps engineer or a financial analyst, to query cost-specific details, including service usage, cost allocation, and associated metadata. You can utilize this table to gather insights on AWS costs, such as costs associated with specific AWS services, cost trends over time, and cost allocation across different record types. The schema outlines the various attributes of the cost and usage report record, including the record type, usage type, operation, and cost.

Amazon Cost Explorer helps you visualize, understand, and manage your AWS costs and usage. The aws_cost_by_record_type_monthly table provides a simplified view of cost for your account (or all linked accounts when run against the organization master), broken down by record type (fees, usage, costs, tax refunds, and credits) and summarized by month, for the last year.


Examples

Basic info

Gain insights into your AWS cost trends by analyzing monthly expenses. This query helps in understanding the cost incurred over time, aiding in effective budget planning and cost management.

select
  linked_account_id,
  record_type,
  period_start,
  blended_cost_amount::numeric::money,
  unblended_cost_amount::numeric::money,
  amortized_cost_amount::numeric::money,
  net_unblended_cost_amount::numeric::money,
  net_amortized_cost_amount::numeric::money
from 
  aws_cost_by_record_type_monthly
order by
  linked_account_id,
  period_start;
select
  linked_account_id,
  record_type,
  period_start,
  CAST(blended_cost_amount AS REAL) AS blended_cost_amount,
  CAST(unblended_cost_amount AS REAL) AS unblended_cost_amount,
  CAST(amortized_cost_amount AS REAL) AS amortized_cost_amount,
  CAST(net_unblended_cost_amount AS REAL) AS net_unblended_cost_amount,
  CAST(net_amortized_cost_amount AS REAL) AS net_amortized_cost_amount
from 
  aws_cost_by_record_type_monthly
order by
  linked_account_id,
  period_start;

Min, Max, and average monthly unblended_cost_amount by account and record type

Explore which linked accounts have the highest, lowest, and average monthly costs, grouped by record type. This can help in understanding the cost distribution and identifying any unusual spending patterns.

select
  linked_account_id,
  record_type,
  min(unblended_cost_amount)::numeric::money as min,
  max(unblended_cost_amount)::numeric::money as max,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_record_type_monthly
group by
  linked_account_id,
  record_type
order by
  linked_account_id;
select
  linked_account_id,
  record_type,
  min(unblended_cost_amount) as min,
  max(unblended_cost_amount) as max,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_record_type_monthly
group by
  linked_account_id,
  record_type
order by
  linked_account_id;

Ranked - Most expensive months (unblended_cost_amount) by account and record type

Explore which months have been the most costly for each account and record type. This can aid in identifying trends and planning future budgeting strategies.

select
  linked_account_id,
  record_type,
  period_start,
  unblended_cost_amount::numeric::money,
  rank() over(partition by linked_account_id, record_type order by unblended_cost_amount desc)
from 
  aws_cost_by_record_type_monthly;
select
  linked_account_id,
  record_type,
  period_start,
  unblended_cost_amount,
  (
    select count(*) + 1 
    from aws_cost_by_record_type_monthly as b
    where 
      a.linked_account_id = b.linked_account_id and 
      a.record_type = b.record_type and 
      a.unblended_cost_amount < b.unblended_cost_amount
  ) as rank
from 
  aws_cost_by_record_type_monthly as a;
title description
Steampipe Table: aws_cost_by_service_daily - Query AWS Cost Explorer using SQL
Allows users to query AWS Cost Explorer to retrieve daily cost breakdown by AWS service.

Table: aws_cost_by_service_daily - Query AWS Cost Explorer using SQL

The AWS Cost Explorer is a tool that allows you to visualize, understand, and manage your AWS costs and usage over time. It provides data about your cost drivers and usage trends, and enables you to drill down into your cost data to identify specific cost allocation tags or accounts in your organization. You can use it to track your daily AWS costs by service, making it easier to manage your AWS spending.

Table Usage Guide

The aws_cost_by_service_daily table in Steampipe provides you with information about the daily cost breakdown by AWS service within AWS Cost Explorer. This table allows you, as a financial analyst or cloud administrator, to query cost-specific details, including total cost, unit, and service name on a daily basis. You can utilize this table to track your spending on AWS services, monitor cost trends, and identify potential cost-saving opportunities. The schema outlines the various attributes of your cost data, including your linked account, service, currency, and amount.

Amazon Cost Explorer helps you visualize, understand, and manage your AWS costs and usage. The aws_cost_by_service_daily table provides you with a simplified view of cost for services in your account (or all linked accounts when run against the organization master), summarized by day, for the last year.


Examples

Basic info

Explore your daily AWS costs by service over a period of time. This query helps you track and analyze your expenditure, aiding in better financial management and budget planning.

select
  service,
  period_start,
  blended_cost_amount::numeric::money,
  unblended_cost_amount::numeric::money,
  amortized_cost_amount::numeric::money,
  net_unblended_cost_amount::numeric::money,
  net_amortized_cost_amount::numeric::money
from 
  aws_cost_by_service_daily
order by
  service,
  period_start;
select
  service,
  period_start,
  cast(blended_cost_amount as decimal),
  cast(unblended_cost_amount as decimal),
  cast(amortized_cost_amount as decimal),
  cast(net_unblended_cost_amount as decimal),
  cast(net_amortized_cost_amount as decimal)
from 
  aws_cost_by_service_daily
order by
  service,
  period_start;

Min, Max, and average daily unblended_cost_amount by service

This query is useful for gaining insights into the range and average of daily costs associated with different services in AWS. It can assist in identifying areas of high expenditure and evaluating cost efficiency.

select
  service,
  min(unblended_cost_amount)::numeric::money as min,
  max(unblended_cost_amount)::numeric::money as max,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_daily
group by
  service
order by
  service;
select
  service,
  min(unblended_cost_amount) as min,
  max(unblended_cost_amount) as max,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_service_daily
group by
  service
order by
  service;

Top 10 most expensive services (by average daily unblended_cost_amount)

Discover the segments that are driving your AWS costs by identifying the top 10 most expensive services based on their average daily costs. This helps in managing your budget more effectively and strategically allocating resources.

select
  service,
  sum(unblended_cost_amount)::numeric::money as sum,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_daily
group by
  service
order by
  average desc
limit 10;
select
  service,
  sum(unblended_cost_amount) as sum,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_service_daily
group by
  service
order by
  average desc
limit 10;

Top 10 most expensive services (by total daily unblended_cost_amount)

Determine the areas in which your AWS services are costing the most by identifying the top 10 services with the highest daily costs. This can help in optimizing resources and budgeting by focusing on the most expensive services.

select
  service,
  sum(unblended_cost_amount)::numeric::money as sum,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_daily
group by
  service
order by
  sum desc
limit 10;
select
  service,
  sum(unblended_cost_amount) as sum,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_service_daily
group by
  service
order by
  sum desc
limit 10;

Ranked - Top 10 Most expensive days (unblended_cost_amount) by service

This query is used to identify the top 10 days with the highest expenses for each service. This information can be helpful in managing budgets and identifying potential cost-saving opportunities.

with ranked_costs as (
  select
    service,
    period_start,
    unblended_cost_amount::numeric::money,
    rank() over(partition by service order by unblended_cost_amount desc)
  from 
    aws_cost_by_service_daily
)
select * from ranked_costs where rank <= 10;
select * from (
  select
    service,
    period_start,
    unblended_cost_amount,
    (
      select count(*) + 1 
      from aws_cost_by_service_daily as b
      where 
        a.service = b.service and 
        a.unblended_cost_amount < b.unblended_cost_amount
    ) as rank
  from 
    aws_cost_by_service_daily as a
)
where rank <= 10;
title description
Steampipe Table: aws_cost_by_service_monthly - Query AWS Cost Explorer Service using SQL
Allows users to query AWS Cost Explorer Service for monthly cost breakdown by service. This table provides details such as the service name, the cost associated with it, and the currency code.

Table: aws_cost_by_service_monthly - Query AWS Cost Explorer Service using SQL

The AWS Cost Explorer Service provides detailed information about your AWS costs, enabling you to analyze your costs and usage over time. You can use it to identify trends, isolate cost drivers, and detect anomalies. With SQL queries, you can retrieve monthly cost data specific to each AWS service.

Table Usage Guide

The aws_cost_by_service_monthly table in Steampipe provides you with information about the monthly cost breakdown by service within AWS Cost Explorer. This table allows you, as a financial analyst, DevOps engineer, or other stakeholder, to query cost-specific details, including the service name, the cost associated with it, and the currency code. You can utilize this table to gather insights on cost management, such as tracking AWS expenses, identifying cost trends, and auditing. The schema outlines the various attributes of the cost information, including the service name, cost, and currency code.

Amazon Cost Explorer helps you visualize, understand, and manage your AWS costs and usage. The aws_cost_by_service_monthly table provides you with a simplified view of cost for services in your account (or all linked accounts when run against the organization master), summarized by month, for the last year.


Examples

Basic info

Explore which AWS services have the highest costs over time. This query is useful in identifying potential areas for cost reduction through service optimization or consolidation.

select
  service,
  period_start,
  blended_cost_amount::numeric::money,
  unblended_cost_amount::numeric::money,
  amortized_cost_amount::numeric::money,
  net_unblended_cost_amount::numeric::money,
  net_amortized_cost_amount::numeric::money
from 
  aws_cost_by_service_monthly
order by
  service,
  period_start;
select
  service,
  period_start,
  cast(blended_cost_amount as decimal),
  cast(unblended_cost_amount as decimal),
  cast(amortized_cost_amount as decimal),
  cast(net_unblended_cost_amount as decimal),
  cast(net_amortized_cost_amount as decimal)
from 
  aws_cost_by_service_monthly
order by
  service,
  period_start;

Min, Max, and average monthly unblended_cost_amount by service

Explore which AWS services have the lowest, highest, and average monthly costs, providing a clear understanding of your AWS expenditure. This can help in budgeting and identifying services that may be costing more than expected.

select
  service,
  min(unblended_cost_amount)::numeric::money as min,
  max(unblended_cost_amount)::numeric::money as max,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_monthly
group by
  service
order by
  service;
select
  service,
  min(unblended_cost_amount) as min,
  max(unblended_cost_amount) as max,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_service_monthly
group by
  service
order by
  service;
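
Beyond per-service minimums and maximums, it can help to see what share of the overall bill each service represents. A PostgreSQL sketch using a window function over the grouped totals (the percent_of_bill column name is an illustrative choice):

```sql
-- Each service's total monthly unblended cost and its share of the
-- grand total across all services.
select
  service,
  sum(unblended_cost_amount)::numeric::money as total,
  round(
    (100 * sum(unblended_cost_amount) / sum(sum(unblended_cost_amount)) over ())::numeric,
    2
  ) as percent_of_bill
from
  aws_cost_by_service_monthly
group by
  service
order by
  total desc;
```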

Top 10 most expensive services (by average monthly unblended_cost_amount)

Discover the segments that are incurring the highest average monthly costs on your AWS account. This information can be crucial for budgeting and cost management strategies, helping you to identify areas where expenses can be reduced.

select
  service,
  sum(unblended_cost_amount)::numeric::money as sum,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_monthly
group by
  service
order by
  average desc
limit 10;
select
  service,
  sum(unblended_cost_amount) as sum,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_service_monthly
group by
  service
order by
  average desc
limit 10;

Top 10 most expensive services (by total monthly unblended_cost_amount)

This query helps to pinpoint the top 10 most costly services in terms of total monthly unblended cost. It is useful for gaining insights into where the majority of your AWS costs are coming from, allowing for more informed budgeting and cost management decisions.

select
  service,
  sum(unblended_cost_amount)::numeric::money as sum,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_monthly
group by
  service
order by
  sum desc
limit 10;
select
  service,
  sum(unblended_cost_amount) as sum,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_service_monthly
group by
  service
order by
  sum desc
limit 10;

Ranked - Most expensive month (unblended_cost_amount) by service

This query is designed to identify the most costly month for each service in terms of unblended costs. It can be useful for budgeting and cost management, helping to highlight areas where expenses may be unexpectedly high.

with ranked_costs as (
  select
    service,
    period_start,
    unblended_cost_amount::numeric::money,
    rank() over(partition by service order by unblended_cost_amount desc)
  from 
    aws_cost_by_service_monthly
)
select * from ranked_costs where rank = 1;
select * from (
  select
    service,
    period_start,
    unblended_cost_amount,
    (
      select count(*) + 1 
      from aws_cost_by_service_monthly as b
      where 
        a.service = b.service and 
        a.unblended_cost_amount < b.unblended_cost_amount
    ) as rank
  from 
    aws_cost_by_service_monthly as a
)
where rank = 1;

Month on month growth (unblended_cost_amount) by service

Analyze your AWS monthly costs to understand the percentage change in expenditure for each service. This could be useful for identifying trends, managing budgets, and making strategic decisions about resource allocation.

with cost_data as (
  select
    service,
    period_start,
    unblended_cost_amount as this_month,
    lag(unblended_cost_amount) over(partition by service order by period_start) as previous_month
  from 
    aws_cost_by_service_monthly
)
select
    service,
    period_start,
    this_month::numeric::money,
    previous_month::numeric::money,
    case 
      when previous_month = 0 and this_month = 0 then 0
      when previous_month = 0 then 999
      else round((100 * ((this_month - previous_month) / previous_month))::numeric, 2) 
    end as percent_change
from
  cost_data
order by
  service,
  period_start;
with cost_data as (
  select
    service,
    period_start,
    unblended_cost_amount as this_month,
    lag(unblended_cost_amount) over(partition by service order by period_start) as previous_month
  from 
    aws_cost_by_service_monthly
)
select
    service,
    period_start,
    this_month,
    previous_month,
    case 
      when previous_month = 0 and this_month = 0 then 0
      when previous_month = 0 then 999
      else round(100 * (this_month - previous_month) / previous_month, 2) 
    end as percent_change
from
  cost_data
order by
  service,
  period_start;
title description
Steampipe Table: aws_cost_by_service_usage_type_daily - Query AWS Cost Explorer Service usage type daily using SQL
Allows users to query AWS Cost Explorer Service daily usage type to fetch detailed data about AWS service usage and costs.

Table: aws_cost_by_service_usage_type_daily - Query AWS Cost Explorer Service usage type daily using SQL

The AWS Cost Explorer Service usage type daily is a feature of AWS Cost Management that provides detailed information about your AWS costs, allowing you to visualize, understand, and manage your AWS costs and usage over time. This service provides data about your cost and usage in both tabular and graphical formats, with the ability to customize views and organize data to reflect your needs. The daily usage type specifically provides a granular view of costs incurred daily for each AWS service used.

Table Usage Guide

The aws_cost_by_service_usage_type_daily table in Steampipe provides you with information about daily usage type and costs for each AWS service within AWS Cost Explorer. This table allows you, as a DevOps engineer, financial analyst, or cloud architect, to query daily-specific details, including usage amount, usage unit, and the corresponding service cost. You can utilize this table to gather insights on daily usage and costs, such as identifying high-cost services, tracking usage patterns, and managing your AWS expenses. The schema outlines the various attributes of the AWS service cost, including the service name, usage type, usage amount, usage start and end dates, and the unblended cost.

Amazon Cost Explorer helps you visualize, understand, and manage your AWS costs and usage. The aws_cost_by_service_usage_type_daily table provides you with a simplified view of cost for services in your account (or all linked accounts when run against the organization master), summarized by day, for the last year.


Examples

Basic info

Explore your daily AWS service usage and costs, sorted by service and the start of the period. This can help you understand and manage your AWS expenses more effectively.

select
  service,
  usage_type,
  period_start,
  blended_cost_amount::numeric::money,
  unblended_cost_amount::numeric::money,
  amortized_cost_amount::numeric::money,
  net_unblended_cost_amount::numeric::money,
  net_amortized_cost_amount::numeric::money
from 
  aws_cost_by_service_usage_type_daily
order by
  service,
  period_start;
select
  service,
  usage_type,
  period_start,
  CAST(blended_cost_amount AS NUMERIC) AS blended_cost_amount,
  CAST(unblended_cost_amount AS NUMERIC) AS unblended_cost_amount,
  CAST(amortized_cost_amount AS NUMERIC) AS amortized_cost_amount,
  CAST(net_unblended_cost_amount AS NUMERIC) AS net_unblended_cost_amount,
  CAST(net_amortized_cost_amount AS NUMERIC) AS net_amortized_cost_amount
from 
  aws_cost_by_service_usage_type_daily
order by
  service,
  period_start;

Min, Max, and average daily unblended_cost_amount by service and usage type

Analyze your daily AWS service usage to understand the minimum, maximum, and average costs associated with each type of usage. This allows for more effective budget management and identification of potential cost-saving opportunities.

select
  service,
  usage_type,
  min(unblended_cost_amount)::numeric::money as min,
  max(unblended_cost_amount)::numeric::money as max,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_usage_type_daily
group by
  service,
  usage_type
order by
  service,
  usage_type;
select
  service,
  usage_type,
  min(unblended_cost_amount) as min,
  max(unblended_cost_amount) as max,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_service_usage_type_daily
group by
  service,
  usage_type
order by
  service,
  usage_type;

Top 10 most expensive service usage types (by average daily unblended_cost_amount)

Discover the segments that incur the highest average daily costs in your AWS services. This can help you identify areas where budget adjustments or cost optimizations might be necessary.

select
  service,
  usage_type,
  sum(unblended_cost_amount)::numeric::money as sum,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_usage_type_daily
group by
  service,
  usage_type
order by
  average desc
limit 10;
select
  service,
  usage_type,
  sum(unblended_cost_amount) as sum,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_service_usage_type_daily
group by
  service,
  usage_type
order by
  average desc
limit 10;

Top 10 most expensive service usage types (by total daily unblended_cost_amount)

This query is used to analyze the most costly services in terms of daily usage. It helps in budget management by highlighting areas where costs are significantly high, thus aiding in cost optimization strategies.

select
  service,
  usage_type,
  sum(unblended_cost_amount)::numeric::money as sum,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_usage_type_daily
group by
  service,
  usage_type
order by
  sum desc
limit 10;
select
  service,
  usage_type,
  sum(unblended_cost_amount) as sum,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_service_usage_type_daily
group by
  service,
  usage_type
order by
  sum desc
limit 10;
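
The usage_type breakdown is most useful when restricted to a single service. A PostgreSQL sketch; the service name shown is only an example and should match a service value from your own query results:

```sql
-- Total daily unblended cost per usage type for one service.
select
  usage_type,
  sum(unblended_cost_amount)::numeric::money as total
from
  aws_cost_by_service_usage_type_daily
where
  service = 'Amazon Simple Storage Service'
group by
  usage_type
order by
  total desc;
```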
title description
Steampipe Table: aws_cost_by_service_usage_type_monthly - Query AWS Cost Explorer Service using SQL
Allows users to query AWS Cost Explorer Service to get detailed cost data per service and usage type on a monthly basis.

Table: aws_cost_by_service_usage_type_monthly - Query AWS Cost Explorer Service using SQL

The AWS Cost Explorer Service is a tool that enables you to view and analyze your costs and usage. You can explore your AWS costs using an interface that lets you observe both your costs and usage patterns. It includes features that allow you to dive deeper into your cost and usage data to identify trends, pinpoint cost drivers, and detect anomalies.

Table Usage Guide

The aws_cost_by_service_usage_type_monthly table in Steampipe provides you with information about the monthly cost data per service and usage type within AWS Cost Explorer Service. This table allows you, as a financial analyst or cloud cost manager, to query detailed cost data, including the service name, usage type, cost, and currency. You can utilize this table to gather insights on monthly AWS costs, such as cost per service, cost per usage type, and the total monthly cost. The schema outlines the various attributes of the cost data, including the service name, usage type, cost, and the currency used.

Amazon Cost Explorer helps you visualize, understand, and manage your AWS costs and usage. The aws_cost_by_service_usage_type_monthly table provides you with a simplified view of cost for services in your account (or all linked accounts when run against the organization master), summarized by month, for the last year.


Examples

Basic info

This query provides a comprehensive overview of your AWS service usage, allowing you to understand your monthly costs. By analyzing the cost and usage patterns, you can identify areas for potential cost savings and optimize your AWS utilization.

select
  service,
  usage_type,
  period_start,
  blended_cost_amount::numeric::money,
  unblended_cost_amount::numeric::money,
  amortized_cost_amount::numeric::money,
  net_unblended_cost_amount::numeric::money,
  net_amortized_cost_amount::numeric::money
from 
  aws_cost_by_service_usage_type_monthly
order by
  service,
  period_start;
select
  service,
  usage_type,
  period_start,
  cast(blended_cost_amount as decimal),
  cast(unblended_cost_amount as decimal),
  cast(amortized_cost_amount as decimal),
  cast(net_unblended_cost_amount as decimal),
  cast(net_amortized_cost_amount as decimal)
from 
  aws_cost_by_service_usage_type_monthly
order by
  service,
  period_start;

Min, Max, and average monthly unblended_cost_amount by service and usage type

Gain insights into your AWS service usage by evaluating the minimum, maximum, and average monthly costs associated with each service and usage type. This helps in better understanding of your cloud spending patterns and can guide cost optimization efforts.

select
  service,
  usage_type,
  min(unblended_cost_amount)::numeric::money as min,
  max(unblended_cost_amount)::numeric::money as max,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_usage_type_monthly
group by
  service,
  usage_type
order by
  service,
  usage_type;
select
  service,
  usage_type,
  min(cast(unblended_cost_amount as numeric)) as min,
  max(cast(unblended_cost_amount as numeric)) as max,
  avg(cast(unblended_cost_amount as numeric)) as average
from 
  aws_cost_by_service_usage_type_monthly
group by
  service,
  usage_type
order by
  service,
  usage_type;

Top 10 most expensive service usage type (by average monthly unblended_cost_amount)

Explore which services and usage types are the most costly on average per month, allowing for targeted cost reduction efforts. This analysis can help prioritize areas for cost optimization within your AWS services.

select
  service,
  usage_type,
  sum(unblended_cost_amount)::numeric::money as sum,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_usage_type_monthly
group by
  service,
  usage_type
order by
  average desc
limit 10;
select
  service,
  usage_type,
  sum(unblended_cost_amount) as sum,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_service_usage_type_monthly
group by
  service,
  usage_type
order by
  average desc
limit 10;

Top 10 most expensive service usage type (by total monthly unblended_cost_amount)

Discover the segments that are contributing the most to your monthly AWS costs. This query helps in identifying the top 10 service usage types that are incurring the highest costs, allowing you to better manage and optimize your resource usage.

select
  service,
  usage_type,
  sum(unblended_cost_amount)::numeric::money as sum,
  avg(unblended_cost_amount)::numeric::money as average
from 
  aws_cost_by_service_usage_type_monthly
group by
  service,
  usage_type
order by
  sum desc
limit 10;
select
  service,
  usage_type,
  sum(unblended_cost_amount) as sum,
  avg(unblended_cost_amount) as average
from 
  aws_cost_by_service_usage_type_monthly
group by
  service,
  usage_type
order by
  sum desc
limit 10;
title description
Steampipe Table: aws_cost_by_tag - Query AWS Cost Explorer using SQL
Allows users to query AWS Cost Explorer to obtain cost allocation tags and associated costs.

Table: aws_cost_by_tag - Query AWS Cost Explorer using SQL

The AWS Cost Explorer is a tool that enables you to view and analyze your costs and usage. You can explore your AWS costs using an interface that allows you to break down costs by AWS service, linked account, tag, and many other dimensions. Through the AWS Cost Explorer API, you can directly access this data and use it to create your own cost management applications.

Table Usage Guide

The aws_cost_by_tag table in Steampipe provides you with information about cost allocation tags and associated costs within AWS Cost Explorer. This table allows you, as a financial analyst, cloud economist, or DevOps engineer, to query cost-specific details, including costs associated with each tag. You can utilize this table to gather insights on cost allocation, such as identifying the most expensive tags, tracking costs of specific projects, departments, or services, and more. The schema outlines the various attributes of the cost allocation tag, including the tag key, cost, and currency.

Amazon Cost Explorer helps you visualize, understand, and manage your AWS costs and usage. The aws_cost_by_tag table provides you with a simplified view of cost by tags in your account. You must specify a granularity (MONTHLY, DAILY) and tag_key_1 to query the table, however, tag_key_2 is optional.

Important Notes

  • The pricing for the Cost Explorer API is per API request - Each request will incur a cost of $0.01 for you.

Examples

Basic info

This query is used to gain insights into the daily cost breakdown of AWS services, based on specific tags. It is particularly useful for tracking and managing costs, especially in scenarios where resources are tagged by project, department, or any other category for cost allocation.

select
  tag_key_1,
  tag_value_1,
  period_start,
  blended_cost_amount::numeric::money,
  unblended_cost_amount::numeric::money,
  amortized_cost_amount::numeric::money,
  net_unblended_cost_amount::numeric::money,
  net_amortized_cost_amount::numeric::money
from
  aws_cost_by_tag
where
  granularity = 'DAILY'
and
  tag_key_1 = 'Name';
select
  tag_key_1,
  tag_value_1,
  period_start,
  CAST(blended_cost_amount AS NUMERIC) AS blended_cost_amount,
  CAST(unblended_cost_amount AS NUMERIC) AS unblended_cost_amount,
  CAST(amortized_cost_amount AS NUMERIC) AS amortized_cost_amount,
  CAST(net_unblended_cost_amount AS NUMERIC) AS net_unblended_cost_amount,
  CAST(net_amortized_cost_amount AS NUMERIC) AS net_amortized_cost_amount
from
  aws_cost_by_tag
where
  granularity = 'DAILY'
and
  tag_key_1 = 'Name';

Min, Max, and average daily unblended_cost_amount by tag

Discover the segments that have the lowest, highest, and average daily costs associated with a specific tag. This is useful for tracking and managing AWS costs on a day-to-day basis by identifying areas where spending is concentrated.

select
  tag_key_1,
  tag_value_1,
  min(unblended_cost_amount)::numeric::money as min,
  max(unblended_cost_amount)::numeric::money as max,
  avg(unblended_cost_amount)::numeric::money as average
from
  aws_cost_by_tag
where
  granularity = 'DAILY'
and
  tag_key_1 = 'Name'
group by
  tag_key_1, tag_value_1;
select
  tag_key_1,
  tag_value_1,
  min(unblended_cost_amount) as min,
  max(unblended_cost_amount) as max,
  avg(unblended_cost_amount) as average
from
  aws_cost_by_tag
where
  granularity = 'DAILY'
and
  tag_key_1 = 'Name'
group by
  tag_key_1, tag_value_1;

Ranked - Top 10 Most expensive days (unblended_cost_amount) by tag

Discover the segments that are the top 10 most costly, based on daily expenditures, to identify potential areas of cost reduction. This is particularly useful for those looking to optimize their resource utilization and manage their budget effectively.

with ranked_costs as
(
  select
    tag_key_1,
    tag_value_1,
    period_start,
    unblended_cost_amount::numeric::money,
    rank() over(partition by tag_key_1
  order by
    unblended_cost_amount desc)
  from
    aws_cost_by_tag
  where
    granularity = 'DAILY'
    and tag_key_1 = 'Name'
)
select
  *
from
  ranked_costs
where
  rank <= 10;
Error: SQLite does not support the rank window function.
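If rank() is unavailable, one possible workaround is a correlated subquery that counts, for each row, how many rows under the same tag key are more expensive. This is only a sketch against the same table and qualifiers; note that the inner query runs once per outer row, so it can be slow and can multiply billed Cost Explorer API requests.

```sql
select
  tag_key_1,
  tag_value_1,
  period_start,
  cast(unblended_cost_amount as numeric) as unblended_cost_amount
from
  aws_cost_by_tag as c
where
  granularity = 'DAILY'
  and tag_key_1 = 'Name'
  -- keep a row only if fewer than 10 rows for this tag key cost more
  and (
    select
      count(*)
    from
      aws_cost_by_tag as c2
    where
      c2.granularity = 'DAILY'
      and c2.tag_key_1 = c.tag_key_1
      and cast(c2.unblended_cost_amount as numeric) > cast(c.unblended_cost_amount as numeric)
  ) < 10
order by
  unblended_cost_amount desc;
```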
title description
Steampipe Table: aws_cost_forecast_daily - Query AWS Cost Explorer Daily Cost Forecast using SQL
Allows users to query AWS Cost Explorer's daily cost forecast data, providing insights into projected daily costs based on historical data.

Table: aws_cost_forecast_daily - Query AWS Cost Explorer Daily Cost Forecast using SQL

The AWS Cost Explorer Daily Cost Forecast is a feature of AWS that allows you to predict your future AWS costs based on your past spending. It uses machine learning algorithms to create a model of your past behavior and estimate your future costs. It provides an SQL interface for querying these forecasts, making it easy to integrate into your existing data analysis workflows.

Table Usage Guide

The aws_cost_forecast_daily table in Steampipe provides you with daily cost forecasts within AWS Cost Explorer. This table allows you, as a financial analyst, DevOps engineer, or cloud administrator, to query forecasted daily costs based on historical data. You can utilize this table to gather insights on your future costs, such as projected increases or decreases in expenses, cost trends, and more. The schema outlines the various attributes of your daily cost forecast, including the date, forecasted amount, and forecasted unit.

Amazon Cost Explorer helps you visualize, understand, and manage your AWS costs and usage. The aws_cost_forecast_daily table retrieves a forecast for how much Amazon Web Services predicts that you will spend each day over the next 4 months, based on your past costs.

Important Notes

  • The pricing for the Cost Explorer API is per API request - Each request will incur a cost of $0.01 for you.

Examples

Basic info

Explore the daily cost forecast for AWS, allowing you to understand and predict your expenditure over time. This can assist in budget planning and identifying potential cost-saving opportunities.


select 
   period_start,
   period_end,
   mean_value::numeric::money   
from 
  aws_cost_forecast_daily
order by
  period_start;
select 
   period_start,
   period_end,
   cast(mean_value as decimal) as mean_value   
from 
  aws_cost_forecast_daily
order by
  period_start;
title description
Steampipe Table: aws_cost_forecast_monthly - Query AWS Cost Explorer Cost Forecast using SQL
Allows users to query Cost Forecasts in AWS Cost Explorer for monthly cost predictions.

Table: aws_cost_forecast_monthly - Query AWS Cost Explorer Cost Forecast using SQL

The AWS Cost Explorer Cost Forecast is a feature of AWS that provides you with the ability to forecast your AWS costs. It uses your historical cost data to predict future expenses, enabling you to manage your budget more effectively. The forecasts are generated using machine learning algorithms and can be customized for different time periods, services, and tags.

Table Usage Guide

The aws_cost_forecast_monthly table in Steampipe provides you with information about your monthly cost forecasts within AWS Cost Explorer. This table allows you, as a financial analyst or cloud cost manager, to query cost forecast details, including predicted costs, end and start dates, and associated metadata. You can utilize this table to gather insights on your future costs, such as predicted expenses for the next month, verification of cost trends, and more. The schema outlines the various attributes of your cost forecast, including the time period, value, and forecast results by time.

Amazon Cost Explorer helps you visualize, understand, and manage your AWS costs and usage. The aws_cost_forecast_monthly table retrieves a forecast for how much Amazon Web Services predicts that you will spend each month over the next 12 months, based on your past costs.

Important Notes

  • The pricing for the Cost Explorer API is per API request - Each request will incur a cost of $0.01 for you.

Examples

Basic info

Assess the elements within your AWS cost forecast on a monthly basis to better understand your spending trends and budget accordingly. This query allows you to analyze your cost data over time, helping you to identify potential cost-saving opportunities and manage your AWS resources more effectively.


select 
   period_start,
   period_end,
   mean_value::numeric::money  
from 
  aws_cost_forecast_monthly
order by
  period_start;
select 
   period_start,
   period_end,
   cast(mean_value as real) as mean_value
from 
  aws_cost_forecast_monthly
order by
  period_start;

Month on month forecasted growth

Gain insights into the monthly growth forecast by comparing the current month's mean value with the previous month's. This allows for a clear understanding of the growth percentage change, which can aid in future planning and budgeting.

with cost_data as (
  select
    period_start,
    mean_value as this_month,
    lag(mean_value,-1) over(order by period_start desc) as previous_month
  from 
    aws_cost_forecast_monthly
)
select
    period_start,
    this_month::numeric::money,
    previous_month::numeric::money,
    case 
      when previous_month = 0 and this_month = 0  then 0
      when previous_month = 0 then 999
      else round((100 * ( (this_month - previous_month) / previous_month))::numeric, 2) 
    end as percent_change
from
  cost_data
order by
  period_start;
with cost_data as (
  select
    period_start,
    mean_value as this_month,
    lag(mean_value,-1) over(order by period_start desc) as previous_month
  from 
    aws_cost_forecast_monthly
)
select
    period_start,
    this_month,
    previous_month,
    case 
      when previous_month = 0 and this_month = 0  then 0
      when previous_month = 0 then 999
      else round((100 * ( (this_month - previous_month) / previous_month)), 2) 
    end as percent_change
from
  cost_data
order by
  period_start;
title description
Steampipe Table: aws_cost_usage - Query AWS Cost Explorer Service Cost and Usage using SQL
Allows users to query Cost and Usage data from AWS Cost Explorer Service to monitor, track, and manage AWS costs and usage over time.

Table: aws_cost_usage - Query AWS Cost Explorer Service Cost and Usage using SQL

The AWS Cost Explorer Service is a tool that allows you to visualize, understand, and manage your AWS costs and usage over time. It provides detailed information about your costs and usage, including trends, cost drivers, and anomalies. With Cost Explorer, you can filter views by various dimensions such as service, linked account, and tags, and view data for up to the last 13 months.

Table Usage Guide

The aws_cost_usage table in Steampipe provides you with information about cost and usage data from AWS Cost Explorer Service. This table enables you as a financial analyst or cloud architect to query cost and usage details, including cost allocation tags, service usage, cost usage, and associated metadata. You can utilize this table to gather insights on cost and usage, such as cost per service, usage per service, verification of cost allocation tags, and more. The schema outlines the various attributes of the cost and usage data for you, including the time period, unblended cost, usage type, and associated tags.

Amazon Cost Explorer assists you in visualizing, understanding, and managing your AWS costs and usage. The aws_cost_usage table offers you a simplified yet flexible view of cost for your account (or all linked accounts when run against the organization master). You need to specify a granularity (MONTHLY, DAILY) and 2 dimension types (AZ, INSTANCE_TYPE, LEGAL_ENTITY_NAME, LINKED_ACCOUNT, OPERATION, PLATFORM, PURCHASE_TYPE, SERVICE, TENANCY, RECORD_TYPE, and USAGE_TYPE).

Important Notes

  • This table requires an '=' qualifier for all of the following columns: granularity, dimension_type_1, dimension_type_2.
  • The pricing for the Cost Explorer API is per API request - Each request will incur a cost of $0.01 for you.

Examples

Monthly net unblended cost by account and service

Explore the monthly expenditure for each linked account and service in your AWS environment. This query can help you understand your cost trends and identify areas for potential savings.

select
  period_start,
  dimension_1 as account_id,
  dimension_2 as service_name,
  net_unblended_cost_amount::numeric::money
from
  aws_cost_usage
where
  granularity = 'MONTHLY'
  and dimension_type_1 = 'LINKED_ACCOUNT'
  and dimension_type_2 = 'SERVICE'
order by
  dimension_1,
  period_start;
select
  period_start,
  dimension_1 as account_id,
  dimension_2 as service_name,
  cast(net_unblended_cost_amount as real) as net_unblended_cost_amount
from
  aws_cost_usage
where
  granularity = 'MONTHLY'
  and dimension_type_1 = 'LINKED_ACCOUNT'
  and dimension_type_2 = 'SERVICE'
order by
  dimension_1,
  period_start;

Top 5 most expensive services (net unblended cost) in each account

Identify the top five most costly services in each account to manage and optimize your AWS expenses effectively.

with ranked_costs as (
  select
    dimension_1 as account_id,
    dimension_2 as service_name,
    sum(net_unblended_cost_amount)::numeric::money as net_unblended_cost,
    rank() over(partition by dimension_1 order by sum(net_unblended_cost_amount) desc)
  from
    aws_cost_usage
  where
    granularity = 'MONTHLY'
    and dimension_type_1 = 'LINKED_ACCOUNT'
    and dimension_type_2 = 'SERVICE'
  group by
    dimension_1,
    dimension_2
  order by
    dimension_1,
    net_unblended_cost desc
)
select * from ranked_costs where rank <= 5;
Error: SQLite does not support rank window functions.
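Without rank(), a simpler SQLite-compatible approximation is to aggregate and take the five most expensive services overall. Note this sketch ranks across all accounts combined, not the top five within each account as the Postgres query does.

```sql
select
  dimension_1 as account_id,
  dimension_2 as service_name,
  sum(cast(net_unblended_cost_amount as numeric)) as net_unblended_cost
from
  aws_cost_usage
where
  granularity = 'MONTHLY'
  and dimension_type_1 = 'LINKED_ACCOUNT'
  and dimension_type_2 = 'SERVICE'
group by
  dimension_1,
  dimension_2
order by
  net_unblended_cost desc
limit 5;
```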

Monthly net unblended cost by account and record type

Analyze your monthly AWS account costs by record type to better understand your expenses. This can help you identify areas where costs may be reduced or controlled.

select
  period_start,
  dimension_1 as account_id,
  dimension_2 as record_type,
  net_unblended_cost_amount::numeric::money
from
  aws_cost_usage
where
  granularity = 'MONTHLY'
  and dimension_type_1 = 'LINKED_ACCOUNT'
  and dimension_type_2 = 'RECORD_TYPE'
order by
  dimension_1,
  period_start;
select
  period_start,
  dimension_1 as account_id,
  dimension_2 as record_type,
  CAST(net_unblended_cost_amount AS REAL) AS net_unblended_cost_amount
from
  aws_cost_usage
where
  granularity = 'MONTHLY'
  and dimension_type_1 = 'LINKED_ACCOUNT'
  and dimension_type_2 = 'RECORD_TYPE'
order by
  dimension_1,
  period_start;

List monthly discounts and credits by account

This query allows users to monitor their AWS account's monthly spending by tracking discounts and credits. It's beneficial for budgeting purposes and helps in optimizing cost management strategies.

select
  period_start,
  dimension_1 as account_id,
  dimension_2 as record_type,
  net_unblended_cost_amount::numeric::money
from
  aws_cost_usage
where
  granularity = 'MONTHLY'
  and dimension_type_1 = 'LINKED_ACCOUNT'
  and dimension_type_2 = 'RECORD_TYPE'
  and dimension_2 in ('DiscountedUsage', 'Credit')
order by
  dimension_1,
  period_start;
select
  period_start,
  dimension_1 as account_id,
  dimension_2 as record_type,
  CAST(net_unblended_cost_amount AS REAL) as net_unblended_cost_amount
from
  aws_cost_usage
where
  granularity = 'MONTHLY'
  and dimension_type_1 = 'LINKED_ACCOUNT'
  and dimension_type_2 = 'RECORD_TYPE'
  and dimension_2 in ('DiscountedUsage', 'Credit')
order by
  dimension_1,
  period_start;
title description
Steampipe Table: aws_dax_cluster - Query AWS DAX Clusters using SQL
Allows users to query AWS DAX Clusters to fetch details about their configurations, status, nodes, and other associated metadata.

Table: aws_dax_cluster - Query AWS DAX Clusters using SQL

The AWS DAX Cluster is a fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x read performance improvement. It operates fully in-memory and is compatible with existing DynamoDB API calls. DAX does all the heavy lifting to deliver accelerated read performance and can be used without application changes.

Table Usage Guide

The aws_dax_cluster table in Steampipe provides you with information about AWS DAX Clusters. This table allows you, as a DevOps engineer, to query cluster-specific details, including cluster names, node types, status, and associated metadata. You can utilize this table to gather insights on clusters, such as cluster configurations, status, nodes, and more. The schema outlines the various attributes of the DAX cluster for you, including the cluster name, ARN, status, node type, and associated tags.

Examples

Basic info

Determine the status and region of active nodes in your AWS DAX clusters to understand their configuration and performance. This helps in managing resources and planning for scalability.

select
  cluster_name,
  description,
  active_nodes,
  iam_role_arn,
  status,
  region
from
  aws_dax_cluster;
select
  cluster_name,
  description,
  active_nodes,
  iam_role_arn,
  status,
  region
from
  aws_dax_cluster;

List clusters that do not enforce server-side encryption (SSE)

Determine the areas in your AWS DAX clusters where server-side encryption is not enforced. This is beneficial for identifying potential security vulnerabilities within your system.

select
  cluster_name,
  description,
  sse_description ->> 'Status' as sse_status
from
  aws_dax_cluster
where
  sse_description ->> 'Status' = 'DISABLED';
select
  cluster_name,
  description,
  json_extract(sse_description, '$.Status') as sse_status
from
  aws_dax_cluster
where
  json_extract(sse_description, '$.Status') = 'DISABLED';

List clusters provisioned with undesired node types (for example, where cache.m5.large and cache.m4.4xlarge are the desired types)

Determine the areas in which clusters are provisioned with non-preferred node types to optimize resource allocation and cost efficiency.

select
  cluster_name,
  node_type,
  count(*) as count
from
  aws_dax_cluster
where
  node_type not in ('cache.m5.large', 'cache.m4.4xlarge')
group by
  cluster_name, node_type;
select
  cluster_name,
  node_type,
  count(*) as count
from
  aws_dax_cluster
where
  node_type not in ('cache.m5.large', 'cache.m4.4xlarge')
group by
  cluster_name, node_type;

Get the network details for each cluster

Discover the segments that provide detailed network information for each cluster, including security group identifiers and availability zones. This can be useful for understanding the network configuration of your clusters and ensuring they are set up correctly.

select
  cluster_name,
  subnet_group,
  sg ->> 'SecurityGroupIdentifier' as sg_id,
  n ->> 'AvailabilityZone' as az_name,
  cluster_discovery_endpoint ->> 'Address' as cluster_discovery_endpoint_address,
  cluster_discovery_endpoint ->> 'Port' as cluster_discovery_endpoint_port
from
  aws_dax_cluster,
  jsonb_array_elements(security_groups) as sg,
  jsonb_array_elements(nodes) as n;
select
  cluster_name,
  subnet_group,
  json_extract(sg.value, '$.SecurityGroupIdentifier') as sg_id,
  json_extract(n.value, '$.AvailabilityZone') as az_name,
  json_extract(cluster_discovery_endpoint, '$.Address') as cluster_discovery_endpoint_address,
  json_extract(cluster_discovery_endpoint, '$.Port') as cluster_discovery_endpoint_port
from
  aws_dax_cluster,
  json_each(security_groups) as sg,
  json_each(nodes) as n;
title description
Steampipe Table: aws_dax_parameter - Query AWS DAX Parameter Groups using SQL
Allows users to query AWS DAX Parameter Groups to retrieve information about their configuration settings.

Table: aws_dax_parameter - Query AWS DAX Parameter Groups using SQL

AWS DAX Parameter Groups are a collection of parameters that you apply to all of the nodes in a DAX cluster. These groups make it easier to manage clusters by enabling you to customize their behavior without having to individually modify each node. They are particularly useful when you want to set consistent parameters across a large number of nodes.

Table Usage Guide

The aws_dax_parameter table in Steampipe provides you with information about Parameter Groups within AWS DynamoDB Accelerator (DAX). This table allows you, as a DevOps engineer, to query parameter group-specific details, including parameter names, types, values, and whether they are modifiable or not. You can utilize this table to gather insights on parameter groups, such as understanding the configurations that control the behavior of your DAX clusters, and to verify if the parameters are set as per your requirements. The schema outlines the various attributes of the DAX parameter group for you, including the parameter name, value, source, data type, and whether it's modifiable or not.

Examples

Basic info

Explore which parameters are in use within your AWS DAX settings to understand their values and types. This information can assist in assessing the configuration for optimal performance and security.

select
  parameter_name,
  parameter_group_name,
  parameter_value,
  data_type,
  parameter_type
from
  aws_dax_parameter;
select
  parameter_name,
  parameter_group_name,
  parameter_value,
  data_type,
  parameter_type
from
  aws_dax_parameter;

Count parameters by parameter group

Identify the distribution of parameters across different parameter groups and regions. This can help you understand how parameters are organized in your AWS DAX environment, which is useful for managing and optimizing your configurations.

select
  parameter_group_name,
  region,
  count(parameter_name) as number_of_parameters
from
  aws_dax_parameter
group by
  parameter_group_name, 
  region;
select
  parameter_group_name,
  region,
  count(parameter_name) as number_of_parameters
from
  aws_dax_parameter
group by
  parameter_group_name, 
  region;

List modifiable parameters

Identify instances where parameters can be modified in your AWS DAX setup. This is useful to understand which aspects of your configuration can be adjusted to optimize performance.

select
  parameter_name,
  parameter_group_name,
  parameter_value,
  data_type,
  parameter_type,
  is_modifiable
from
  aws_dax_parameter
where
  is_modifiable = 'TRUE';
select
  parameter_name,
  parameter_group_name,
  parameter_value,
  data_type,
  parameter_type,
  is_modifiable
from
  aws_dax_parameter
where
  is_modifiable = 'TRUE';

List parameters that are not user defined

Identify the parameters in your AWS DAX that are not user-defined. This can help ensure that system-defined settings are not inadvertently altered, maintaining system stability and performance.

select
  parameter_name,
  change_type,
  parameter_group_name,
  parameter_value,
  data_type,
  parameter_type,
  source
from
  aws_dax_parameter
where
  source <> 'user';
select
  parameter_name,
  change_type,
  parameter_group_name,
  parameter_value,
  data_type,
  parameter_type,
  source
from
  aws_dax_parameter
where
  source != 'user';
title description
Steampipe Table: aws_dax_parameter_group - Query AWS DAX Parameter Groups using SQL
Allows users to query AWS DynamoDB Accelerator (DAX) Parameter Groups, providing details such as parameter group name, ARN, description, and parameter settings.

Table: aws_dax_parameter_group - Query AWS DAX Parameter Groups using SQL

The AWS DAX Parameter Group is a resource that provides a container for database engine parameter values that can be applied to one or more DAX clusters. These parameters act as a means to manage the behavior of the DAX instances within the cluster. In essence, it allows you to establish configurations and settings for your DAX databases, providing customization and control over the DAX environment.

Table Usage Guide

The aws_dax_parameter_group table in Steampipe provides you with information about Parameter Groups within AWS DynamoDB Accelerator (DAX). This table enables you, as a DevOps engineer, to query Parameter Group-specific details, including the group name, ARN, description, and parameter settings. You can utilize this table to gather insights on Parameter Groups, such as their configurations, associated parameters, and more. The schema outlines the various attributes of the DAX Parameter Group for you, including the parameter group name, ARN, description, and associated parameters.

Examples

Basic info

Gain insights into the regions and their associated descriptions within your AWS DAX parameter groups. This can be useful to understand the geographical distribution and purpose of your parameter groups.

select
  parameter_group_name,
  description,
  region
from
  aws_dax_parameter_group;
select
  parameter_group_name,
  description,
  region
from
  aws_dax_parameter_group;

Get cluster details associated with the parameter group

Discover the segments that are linked to a specific parameter group in your DAX clusters. This is useful for assessing the configuration of your clusters and understanding their current state.

select
  p.parameter_group_name,
  c.cluster_name,
  c.node_type,
  c.status
from
  aws_dax_parameter_group as p,
  aws_dax_cluster as c
where
  c.parameter_group ->> 'ParameterGroupName' = p.parameter_group_name;
select
  p.parameter_group_name,
  c.cluster_name,
  c.node_type,
  c.status
from
  aws_dax_parameter_group as p,
  aws_dax_cluster as c
where
  json_extract(c.parameter_group, '$.ParameterGroupName') = p.parameter_group_name;
title description
Steampipe Table: aws_dax_subnet_group - Query AWS DAX Subnet Group using SQL
Allows users to query AWS DAX Subnet Group details, such as the subnet group name, description, VPC ID, and the subnets in the group.

Table: aws_dax_subnet_group - Query AWS DAX Subnet Group using SQL

The AWS DAX Subnet Group is a resource in Amazon DynamoDB Accelerator (DAX) that allows you to specify a particular subnet group when you create a DAX cluster. A subnet group is a collection of subnets (typically private) that you can designate for your clusters running in a virtual private cloud (VPC). This allows you to configure network access to your DAX clusters.

Table Usage Guide

The aws_dax_subnet_group table in Steampipe provides you with information about subnet groups within Amazon DynamoDB Accelerator (DAX). This table allows you, as a DevOps engineer, to query subnet group-specific details, including the subnet group name, description, VPC ID, and the subnets in the group. You can utilize this table to gather insights on subnet groups, such as their associated VPCs, subnet IDs, and more. The schema outlines the various attributes of the DAX subnet group for you, including the subnet group name, VPC ID, subnet ID, and associated tags.

Examples

Basic info

Explore which AWS DAX subnet groups are in use, gaining insights into their associated VPCs and regions. This can be useful for assessing your network's configuration and understanding its geographical distribution.

select
  subnet_group_name,
  description,
  vpc_id,
  subnets,
  region
from
  aws_dax_subnet_group;
select
  subnet_group_name,
  description,
  vpc_id,
  subnets,
  region
from
  aws_dax_subnet_group;

List VPC details for each subnet group

Determine the areas in which each subnet group is associated with specific VPC details. This can be useful in understanding the configuration and state of your network for better resource management and security.

select
  subnet_group_name,
  v.vpc_id,
  v.arn as vpc_arn,
  v.cidr_block as vpc_cidr_block,
  v.state as vpc_state,
  v.is_default as is_default_vpc,
  v.region
from
  aws_dax_subnet_group g
join aws_vpc v
  on v.vpc_id = g.vpc_id;
select
  subnet_group_name,
  v.vpc_id,
  v.arn as vpc_arn,
  v.cidr_block as vpc_cidr_block,
  v.state as vpc_state,
  v.is_default as is_default_vpc,
  v.region
from
  aws_dax_subnet_group g
join aws_vpc v
  on v.vpc_id = g.vpc_id;

List subnet details for each subnet group

This query is useful for gaining insights into the specific details of each subnet group within a network. Using this information, one could optimize network structure, improve resource allocation, or enhance security measures.

select
  subnet_group_name,
  g.vpc_id,
  vs.subnet_arn,
  vs.cidr_block as subnet_cidr_block,
  vs.state as subnet_state,
  vs.availability_zone as subnet_availability_zone,
  vs.region
from
  aws_dax_subnet_group g,
  jsonb_array_elements(subnets) s
join aws_vpc_subnet vs
  on vs.subnet_id = s ->> 'SubnetIdentifier';
select
  subnet_group_name,
  g.vpc_id,
  vs.subnet_arn,
  vs.cidr_block as subnet_cidr_block,
  vs.state as subnet_state,
  vs.availability_zone as subnet_availability_zone,
  vs.region
from
  aws_dax_subnet_group g,
  json_each(subnets) s
join aws_vpc_subnet vs
  on vs.subnet_id = json_extract(s.value, '$.SubnetIdentifier');
title description
Steampipe Table: aws_directory_service_certificate - Query AWS Directory Service Certificates using SQL
Allows users to query AWS Directory Service Certificates to gather information about the certificates associated with AWS Managed Microsoft AD and Simple AD directories.

Table: aws_directory_service_certificate - Query AWS Directory Service Certificates using SQL

The AWS Directory Service Certificate is a component of the AWS Directory Service, which simplifies the setup and management of Windows and Linux directories in the cloud. These certificates are used to establish secure LDAP communications between your applications and your AWS managed directories. They provide an extra layer of security by encrypting your data and establishing a secure connection.

Table Usage Guide

The aws_directory_service_certificate table in Steampipe provides you with information about the certificates associated with AWS Managed Microsoft AD and Simple AD directories. This table allows you as an IT administrator or security professional to query certificate-specific details, including certificate state, expiry date, and associated metadata. You can utilize this table to gather insights on certificates, such as active certificates, expired certificates, and certificates nearing expiry. The schema outlines the various attributes of the Directory Service Certificate for you, including the certificate ID, common name, expiry date, registered date, and the state of the certificate.

Examples

Basic Info

Determine the status and validity of your AWS Directory Service's security certificates. This is particularly useful for maintaining system security by ensuring certificates are up-to-date and appropriately configured.

select
  directory_id,
  certificate_id,
  common_name,
  type,
  state,
  expiry_date_time
from
  aws_directory_service_certificate;
select
  directory_id,
  certificate_id,
  common_name,
  type,
  state,
  expiry_date_time
from
  aws_directory_service_certificate;

List 'MicrosoftAD' type directories

List the certificates associated with 'MicrosoftAD' type directories. This can be useful to gain insights into certificate usage across these directories within your AWS environment.

select
  c.certificate_id,
  c.common_name,
  c.directory_id,
  c.type as certificate_type,
  d.name as directory_name,
  d.type as directory_type
from
  aws_directory_service_certificate c,
  aws_directory_service_directory d
where
  c.directory_id = d.directory_id
  and d.type = 'MicrosoftAD';
select
  c.certificate_id,
  c.common_name,
  c.directory_id,
  c.type as certificate_type,
  d.name as directory_name,
  d.type as directory_type
from
  aws_directory_service_certificate c,
  aws_directory_service_directory d
where
  c.directory_id = d.directory_id
  and d.type = 'MicrosoftAD';

List deregistered certificates

Identify instances where certificates have been deregistered within the AWS directory service. This can be useful in understanding the history of your security configuration and tracking changes over time.

select
  common_name,
  directory_id,
  type,
  state
from
  aws_directory_service_certificate
where
  state = 'Deregistered';
select
  common_name,
  directory_id,
  type,
  state
from
  aws_directory_service_certificate
where
  state = 'Deregistered';

List certificates that will expire in the coming 7 days

Identify the certificates that are due to expire in the next week. This allows you to proactively manage and renew them before they lapse, ensuring continuous and secure operations.

select
  directory_id,
  certificate_id,
  common_name,
  type,
  state,
  expiry_date_time
from
  aws_directory_service_certificate
where
  expiry_date_time between now() and now() + interval '7' day;
select
  directory_id,
  certificate_id,
  common_name,
  type,
  state,
  expiry_date_time
from
  aws_directory_service_certificate
where
  expiry_date_time between datetime('now') and datetime('now', '+7 day');
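
As a complementary check, a sketch using the same columns can surface certificates that have already expired and should be replaced immediately:

```sql
-- Certificates whose expiry date is already in the past (PostgreSQL syntax).
select
  directory_id,
  certificate_id,
  common_name,
  state,
  expiry_date_time
from
  aws_directory_service_certificate
where
  expiry_date_time < now();
```

For the SQLite build, replace `now()` with `datetime('now')`.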

Get client certificate auth settings of each certificate

Review the client certificate authentication settings of each certificate, including the configured Online Certificate Status Protocol (OCSP) URL. This helps ensure certificates are correctly set up for client authentication, thereby enhancing security.

select
  directory_id,
  certificate_id,
  common_name,
  client_cert_auth_settings -> 'OCSPUrl' as ocsp_url
from
  aws_directory_service_certificate;
select
  directory_id,
  certificate_id,
  common_name,
  json_extract(client_cert_auth_settings, '$.OCSPUrl') as ocsp_url
from
  aws_directory_service_certificate;

Retrieve the number of certificates registered in each directory

Determine the distribution of certificates across various directories to understand their allocation and manage resources more effectively.

select
  directory_id,
  count(*) as certificate_count
from
  aws_directory_service_certificate
group by
  directory_id;
select
  directory_id,
  count(*) as certificate_count
from
  aws_directory_service_certificate
group by
  directory_id;

List all certificates that were registered more than a year ago and have not been deregistered

Pinpoint the specific instances where certificates have been registered for over a year and have not yet been deregistered. This can be useful for maintaining security standards and ensuring outdated certificates are properly managed.

select
  common_name,
  directory_id,
  type,
  state
from
  aws_directory_service_certificate
where
  registered_date_time <= now() - interval '1 year'
  and state not like 'Deregister%';
select
  common_name,
  directory_id,
  type,
  state
from
  aws_directory_service_certificate
where
  registered_date_time <= datetime('now', '-1 year')
  and state not like 'Deregister%';

Find the certificate with the latest registration date in each AWS partition

Discover the segments that have the most recent certificate registrations within each AWS partition. This can be useful for maintaining up-to-date security practices and ensuring compliance within your AWS infrastructure.

select distinct on (partition)
  partition,
  certificate_id,
  registered_date_time
from
  aws_directory_service_certificate
order by
  partition,
  registered_date_time desc;
select
  partition,
  certificate_id,
  max(registered_date_time) as registered_date_time
from
  aws_directory_service_certificate
group by
  partition;
title description
Steampipe Table: aws_directory_service_directory - Query AWS Directory Service Directories using SQL
Allows users to query AWS Directory Service Directories for information about AWS Managed Microsoft AD, AWS Managed AD, and Simple AD directories.

Table: aws_directory_service_directory - Query AWS Directory Service Directories using SQL

The AWS Directory Service provides multiple ways to use Microsoft Active Directory with other AWS services. Directories store information about a network's users, groups, and devices, enabling AWS services and instances to use this information. AWS Directory Service Directories are highly available and scalable, providing a cost-effective way to apply policies and security settings across an AWS environment.

Table Usage Guide

The aws_directory_service_directory table in Steampipe provides you with information about AWS Directory Service Directories. These include AWS Managed Microsoft AD, AWS Managed AD, and Simple AD directories. This table allows you, as a DevOps engineer, to query directory-specific details, including directory ID, type, size, and status, among others. You can utilize this table to gather insights on directories, such as their descriptions, DNS IP addresses, and security group IDs. The schema outlines the various attributes of the Directory Service Directory for you, including its ARN, creation timestamp, alias, and associated tags.

Examples

Basic Info

Explore the basic information linked to your AWS Directory Service to better manage and monitor your resources. This can be particularly useful in maintaining security and compliance within your IT infrastructure.

select
  name,
  arn,
  directory_id
from
  aws_directory_service_directory;
select
  name,
  arn,
  directory_id
from
  aws_directory_service_directory;

List MicrosoftAD type directories

Determine the areas in which MicrosoftAD type directories are being used within your AWS Directory Service. This can help in auditing and managing your AWS resources efficiently.

select
  name,
  arn,
  directory_id,
  type
from
  aws_directory_service_directory
where
  type = 'MicrosoftAD';
select
  name,
  arn,
  directory_id,
  type
from
  aws_directory_service_directory
where
  type = 'MicrosoftAD';

Get details about the shared directories

Discover the segments that share directories within your network. This query is useful to understand the distribution of shared resources, their status, and the accounts they are shared with, helping you maintain a balanced and secure network.

select
  name,
  directory_id,
  sd ->> 'ShareMethod' share_method,
  sd ->> 'ShareStatus' share_status,
  sd ->> 'SharedAccountId' shared_account_id,
  sd ->> 'SharedDirectoryId' shared_directory_id
from
  aws_directory_service_directory,
  jsonb_array_elements(shared_directories) sd;
select
  name,
  directory_id,
  json_extract(sd.value, '$.ShareMethod') as share_method,
  json_extract(sd.value, '$.ShareStatus') as share_status,
  json_extract(sd.value, '$.SharedAccountId') as shared_account_id,
  json_extract(sd.value, '$.SharedDirectoryId') as shared_directory_id
from
  aws_directory_service_directory
join
  json_each(shared_directories) as sd;

Get snapshot limit details of each directory

Identify instances where the snapshot limit of each directory in your AWS Directory Service has been reached. This can help manage storage and prevent any potential disruptions due to reaching the limit.

select
  name,
  directory_id,
  snapshot_limit ->> 'ManualSnapshotsCurrentCount' as manual_snapshots_current_count,
  snapshot_limit ->> 'ManualSnapshotsLimit' as manual_snapshots_limit,
  snapshot_limit ->> 'ManualSnapshotsLimitReached' as manual_snapshots_limit_reached
from
  aws_directory_service_directory;
select
  name,
  directory_id,
  json_extract(snapshot_limit, '$.ManualSnapshotsCurrentCount') as manual_snapshots_current_count,
  json_extract(snapshot_limit, '$.ManualSnapshotsLimit') as manual_snapshots_limit,
  json_extract(snapshot_limit, '$.ManualSnapshotsLimitReached') as manual_snapshots_limit_reached
from
  aws_directory_service_directory;

Get SNS topic details of each directory

Determine the areas in which Simple Notification Service (SNS) topics are linked with each directory in your AWS Directory Service. This can be useful to understand the communication setup and status within your organization's AWS infrastructure.

select
  name,
  directory_id,
  e ->> 'CreatedDateTime' as topic_created_date_time,
  e ->> 'Status' as topic_status,
  e ->> 'TopicArn' as topic_arn,
  e ->> 'TopicName' as topic_name
from
  aws_directory_service_directory,
  jsonb_array_elements(event_topics) as e;
select
  name,
  directory_id,
  json_extract(e.value, '$.CreatedDateTime') as topic_created_date_time,
  json_extract(e.value, '$.Status') as topic_status,
  json_extract(e.value, '$.TopicArn') as topic_arn,
  json_extract(e.value, '$.TopicName') as topic_name
from
  aws_directory_service_directory
join
  json_each(event_topics) as e;
title description
Steampipe Table: aws_directory_servicelog_subscription - Query AWS Directory Service Log Subscription using SQL
Allows users to query AWS Directory Service Log Subscription to obtain detailed information about each log subscription associated with the AWS Directory Service.

Table: aws_directory_servicelog_subscription - Query AWS Directory Service Log Subscription using SQL

The AWS Directory Service Log Subscription is a feature of AWS Directory Service that allows you to monitor directory-related events. It enables you to subscribe to and receive logs of activities such as directory creation, deletion, and modification. This service aids in tracking and responding to security or operational issues related to your AWS Directory Service.

Table Usage Guide

The aws_directory_servicelog_subscription table in Steampipe provides you with information about each log subscription associated with the AWS Directory Service. This table allows you, as a DevOps engineer, to query log subscription-specific details, including the directory ID, log group name, and subscription status. You can utilize this table to gather insights on log subscriptions, such as subscription status, associated log groups, and more. The schema outlines for you the various attributes of the log subscription, including the directory ID, log group name, and subscription status.

Examples

Basic info

Explore the creation dates and associated details of log subscriptions within Amazon Directory Service. This can be useful to track the timeline of log subscription activities and manage the configuration of your AWS Directory Service.

select
  log_group_name,
  partition,
  subscription_created_date_time,
  directory_id,
  title
from
  aws_directory_service_log_subscription;
select
  log_group_name,
  partition,
  subscription_created_date_time,
  directory_id,
  title
from
  aws_directory_service_log_subscription;

Get details of the directory associated to the log subscription

Determine the associations between log subscriptions and their corresponding directories. This is useful for understanding the relationship between specific directories and the logs they generate, aiding in efficient log management and troubleshooting.

select
  s.log_group_name,
  d.name as directory_name,
  d.arn as directory_arn,
  d.directory_id,
  d.type as directory_type
from
  aws_directory_service_log_subscription as s
  left join aws_directory_service_directory as d on s.directory_id = d.directory_id;
select
  s.log_group_name,
  d.name as directory_name,
  d.arn as directory_arn,
  d.directory_id,
  d.type as directory_type
from
  aws_directory_service_log_subscription as s
  left join aws_directory_service_directory as d on s.directory_id = d.directory_id;
title description
Steampipe Table: aws_dlm_lifecycle_policy - Query AWS DLM Lifecycle Policies using SQL
Allows users to query AWS DLM Lifecycle Policies to retrieve detailed information about each policy, including its configuration, status, and tags.

Table: aws_dlm_lifecycle_policy - Query AWS DLM Lifecycle Policies using SQL

The AWS DLM (Data Lifecycle Manager) Lifecycle Policy is a service that automates the creation, retention, and deletion of Amazon EBS volume snapshots. This service eliminates the need for custom scripts and manual operations to manage the lifecycle of EBS volume snapshots. It allows you to manage the lifecycle of your snapshots with policy-based management, reducing the cost and effort of data backup, disaster recovery, and migration tasks.

Table Usage Guide

The aws_dlm_lifecycle_policy table in Steampipe provides you with information about DLM (Data Lifecycle Manager) lifecycle policies within AWS. This table enables you, as a DevOps engineer, to query policy-specific details, including policy ID, policy description, state, status message, and execution details. You can utilize this table to gather insights on policies, such as the policy execution frequency, target tags, and retention rules. The schema outlines the various attributes of the DLM lifecycle policy for you, including policy ARN, creation date, policy details, and associated tags.

Examples

Basic Info

Explore which AWS Data Lifecycle Manager policies have been created and when, to manage and monitor the lifecycle of your AWS resources effectively.

select
  policy_id,
  arn,
  date_created
from
  aws_dlm_lifecycle_policy;
select
  policy_id,
  arn,
  date_created
from
  aws_dlm_lifecycle_policy;

List policies where snapshot sharing is scheduled

Determine the areas in which snapshot sharing is scheduled within your policy settings. This helps to identify potential security risks and ensure data integrity.

select
  policy_id,
  arn,
  date_created,
  policy_type,
  s ->> 'ShareRules' as share_rules
from
  aws_dlm_lifecycle_policy,
  jsonb_array_elements(policy_details -> 'Schedules') s
where 
  s ->> 'ShareRules' is not null;
select
  policy_id,
  arn,
  date_created,
  policy_type,
  json_extract(s.value, '$.ShareRules') as share_rules
from
  aws_dlm_lifecycle_policy,
  json_each(json_extract(policy_details, '$.Schedules')) as s
where 
  json_extract(s.value, '$.ShareRules') is not null;

List policies where cross-region copying is scheduled

Explore policies that have cross-region copying scheduled. This is useful to identify and manage data replication across different geographical areas for redundancy and disaster recovery purposes.

select
  policy_id,
  arn,
  date_created,
  policy_type,
  s ->> 'CrossRegionCopyRules' as cross_region_copy_rules
from
  aws_dlm_lifecycle_policy,
  jsonb_array_elements(policy_details -> 'Schedules') s
where 
  s ->> 'CrossRegionCopyRules' is not null;
select
  policy_id,
  arn,
  date_created,
  policy_type,
  json_extract(s.value, '$.CrossRegionCopyRules') as cross_region_copy_rules
from
  aws_dlm_lifecycle_policy,
  json_each(json_extract(policy_details, '$.Schedules')) as s
where 
  json_extract(s.value, '$.CrossRegionCopyRules') is not null;

List maximum snapshots allowed to be retained after each schedule

Review the retention rules in your AWS DLM lifecycle policy schedules to see the maximum number of snapshots retained after each schedule runs. This can be useful to manage storage costs and ensure your backups meet retention requirements for backup or disaster recovery purposes.

select
  policy_id,
  arn,
  date_created,
  policy_type,
  s -> 'RetainRule' ->> 'Count' as retain_count
from
  aws_dlm_lifecycle_policy,
  jsonb_array_elements(policy_details -> 'Schedules') s
where 
  s -> 'RetainRule' is not null;
select
  policy_id,
  arn,
  date_created,
  policy_type,
  json_extract(json_extract(s.value, '$.RetainRule'), '$.Count') as retain_count
from
  aws_dlm_lifecycle_policy,
  json_each(json_extract(policy_details, '$.Schedules')) as s
where 
  json_extract(s.value, '$.RetainRule') is not null;
title description
Steampipe Table: aws_dms_certificate - Query AWS DMS Certificates using SQL
Allows users to query AWS DMS (Database Migration Service) Certificates. This table provides information about SSL/TLS certificates used in AWS DMS for encrypting data during database migration tasks. Certificates play a crucial role in ensuring the security and integrity of data transferred between source and target databases.

Table: aws_dms_certificate - Query AWS DMS Certificates using SQL

AWS DMS (Database Migration Service) Certificate refers to an SSL/TLS certificate used in AWS DMS for encrypting data during the process of migrating databases. This certificate plays a crucial role in ensuring the security and integrity of the data as it is transferred between the source and target databases in a migration task.

Table Usage Guide

The aws_dms_certificate table in Steampipe enables users to query information about AWS DMS Certificates. These certificates are used to secure the data during database migration tasks. Users can retrieve details such as the certificate identifier, ARN, certificate creation date, signing algorithm, valid-to date, and region. Additionally, the table allows users to filter certificates based on various criteria, such as expiration date, signing algorithm, ownership, and more.

Examples

Basic info

Retrieve basic information about AWS DMS Certificates, including their identifiers, ARNs, certificate creation dates, signing algorithms, valid-to dates, and regions. This query provides an overview of the certificates in your AWS environment.

select
  certificate_identifier,
  arn,
  certificate_creation_date,
  signing_algorithm,
  valid_to_date,
  region
from
  aws_dms_certificate;
select
  certificate_identifier,
  arn,
  certificate_creation_date,
  signing_algorithm,
  valid_to_date,
  region
from
  aws_dms_certificate;

List certificates expiring in next 10 days

Identify AWS DMS Certificates that are set to expire within the next 10 days. This query helps you proactively manage certificate renewals.

select
  certificate_identifier,
  arn,
  key_length,
  signing_algorithm,
  valid_to_date
from
  aws_dms_certificate
where
  valid_to_date between current_date and current_date + interval '10' day;
select
  certificate_identifier,
  arn,
  key_length,
  signing_algorithm,
  valid_to_date
from
  aws_dms_certificate
where
  valid_to_date between date('now') and date('now', '+10 day');

List certificates with SHA256 signing algorithm

Retrieve AWS DMS Certificates that use the SHA256 with RSA signing algorithm. This query helps you identify certificates with specific security configurations.

select
  certificate_identifier,
  arn,
  signing_algorithm,
  key_length,
  certificate_owner
from
  aws_dms_certificate
where
  signing_algorithm = 'SHA256withRSA';
select
  certificate_identifier,
  arn,
  signing_algorithm,
  key_length,
  certificate_owner
from
  aws_dms_certificate
where
  signing_algorithm = 'SHA256withRSA';

List certificates not owned by the current account

Identify AWS DMS Certificates that are not owned by the current AWS account. This query helps you keep track of certificates associated with other accounts.

select
  certificate_identifier,
  arn,
  certificate_owner,
  account_id
from
  aws_dms_certificate
where
  certificate_owner <> account_id;
select
  certificate_identifier,
  arn,
  certificate_owner,
  account_id
from
  aws_dms_certificate
where
  certificate_owner <> account_id;

Get the number of days left until certificates expire

Retrieve AWS DMS Certificates along with the number of days left until they expire. This query helps you monitor certificate expiration dates.

select
  certificate_identifier,
  arn,
  certificate_owner,
  (valid_to_date - current_date) as days_left,
  region
from
  aws_dms_certificate;
select
  certificate_identifier,
  arn,
  certificate_owner,
  (julianday(valid_to_date) - julianday('now')) as days_left,
  region
from
  aws_dms_certificate;
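
Building on the days-left calculation above, a sketch like the following (assuming the same `valid_to_date` column) sorts certificates so the soonest-expiring appear first, which is handy for renewal triage:

```sql
-- Soonest-expiring DMS certificates first (PostgreSQL syntax).
select
  certificate_identifier,
  (valid_to_date - current_date) as days_left
from
  aws_dms_certificate
order by
  valid_to_date asc
limit 5;
```

For the SQLite build, compute `days_left` with `julianday(valid_to_date) - julianday('now')` as shown in the example above.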
title description
Steampipe Table: aws_dms_endpoint - Query AWS DMS Endpoints using SQL
Query AWS DMS Endpoints to retrieve connection information for source or target databases in database migration activities.

Table: aws_dms_endpoint - Query AWS DMS Endpoints using SQL

AWS Database Migration Service (DMS) Endpoints are a pivotal component within AWS DMS, delineating the connection details for source or target databases involved in migration tasks. These endpoints are essential for defining the data's origin (source) and destination (target).

Table Usage Guide

The aws_dms_endpoint table in Steampipe allows you to query connection-specific information, such as the endpoint identifier, ARN, database name, endpoint type, and the database engine details. This table is invaluable for DevOps engineers and database administrators overseeing database migrations, as it facilitates the monitoring and management of endpoint configurations and ensures the smooth execution of migration tasks.

Examples

Basic info

Retrieve basic information about AWS DMS endpoints, including their identifiers, ARNs, certificate ARNs, database names, endpoint types, and engine names.

select
  endpoint_identifier,
  arn,
  certificate_arn,
  database_name,
  endpoint_type,
  engine_display_name,
  engine_name
from
  aws_dms_endpoint;
select
  endpoint_identifier,
  arn,
  certificate_arn,
  database_name,
  endpoint_type,
  engine_display_name,
  engine_name
from
  aws_dms_endpoint;

List source endpoints

Identify all source endpoints in AWS DMS, showcasing their identifiers, ARNs, display names, types, and engine names.

select
  endpoint_identifier,
  arn,
  engine_display_name,
  endpoint_type,
  engine_name
from
  aws_dms_endpoint
where
  endpoint_type = 'SOURCE';
select
  endpoint_identifier,
  arn,
  engine_display_name,
  endpoint_type,
  engine_name
from
  aws_dms_endpoint
where
  endpoint_type = 'SOURCE';

List MySQL endpoints

Retrieve a comprehensive list of AWS DMS endpoints configured for MySQL databases, including their identifiers, ARNs, engine names, creation times, and MySQL-specific settings.

select
  endpoint_identifier,
  arn,
  engine_name,
  instance_create_time,
  my_sql_settings
from
  aws_dms_endpoint
where
  engine_name = 'mysql';
select
  endpoint_identifier,
  arn,
  engine_name,
  instance_create_time,
  my_sql_settings
from
  aws_dms_endpoint
where
  engine_name = 'mysql';

List endpoints that have SSL enabled

Display all AWS DMS endpoints with SSL encryption enabled, detailing their identifiers, KMS key IDs, server names, service access role ARNs, and SSL modes.

select
  endpoint_identifier,
  kms_key_id,
  server_name,
  service_access_role_arn,
  ssl_mode
from
  aws_dms_endpoint
where
  ssl_mode <> 'none';
select
  endpoint_identifier,
  kms_key_id,
  server_name,
  service_access_role_arn,
  ssl_mode
from
  aws_dms_endpoint
where
  ssl_mode <> 'none';
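
To summarize rather than list, a sketch over the same `ssl_mode` column can show how endpoints are distributed across SSL modes, making unencrypted endpoints easy to spot:

```sql
-- Distribution of DMS endpoints by SSL mode (works in both PostgreSQL and SQLite).
select
  ssl_mode,
  count(*) as endpoint_count
from
  aws_dms_endpoint
group by
  ssl_mode
order by
  endpoint_count desc;
```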

Get MySQL setting details for MySQL endpoints

Extract detailed MySQL settings for AWS DMS endpoints configured for MySQL, including connection scripts, metadata settings, database names, and other MySQL-specific configurations.

select
  endpoint_identifier,
  arn,
  my_sql_settings ->> 'AfterConnectScript' as after_connect_script,
  (my_sql_settings ->> 'CleanSourceMetadataOnMismatch')::boolean as clean_source_metadata_on_mismatch,
  my_sql_settings ->> 'DatabaseName' as database_name,
  (my_sql_settings ->> 'EventsPollInterval')::integer as events_poll_interval,
  (my_sql_settings ->> 'ExecuteTimeout')::integer as execute_timeout,
  (my_sql_settings ->> 'MaxFileSize')::integer as max_file_size,
  (my_sql_settings ->> 'ParallelLoadThreads')::integer as parallel_load_threads,
  my_sql_settings ->> 'Password' as password,
  (my_sql_settings ->> 'Port')::integer as port,
  my_sql_settings ->> 'SecretsManagerAccessRoleArn' as secrets_manager_access_role_arn,
  my_sql_settings ->> 'SecretsManagerSecretId' as secrets_manager_secret_id,
  my_sql_settings ->> 'ServerName' as server_name,
  my_sql_settings ->> 'ServerTimezone' as server_timezone,
  my_sql_settings ->> 'TargetDbType' as target_db_type,
  my_sql_settings ->> 'Username' as username
from
  aws_dms_endpoint
where
  engine_name = 'mysql';
select
  endpoint_identifier,
  arn,
  json_extract(my_sql_settings, '$.AfterConnectScript') as after_connect_script,
  cast(json_extract(my_sql_settings, '$.CleanSourceMetadataOnMismatch') as boolean) as clean_source_metadata_on_mismatch,
  json_extract(my_sql_settings, '$.DatabaseName') as database_name,
  cast(json_extract(my_sql_settings, '$.EventsPollInterval') as integer) as events_poll_interval,
  cast(json_extract(my_sql_settings, '$.ExecuteTimeout') as integer) as execute_timeout,
  cast(json_extract(my_sql_settings, '$.MaxFileSize') as integer) as max_file_size,
  cast(json_extract(my_sql_settings, '$.ParallelLoadThreads') as integer) as parallel_load_threads,
  json_extract(my_sql_settings, '$.Password') as password,
  cast(json_extract(my_sql_settings, '$.Port') as integer) as port,
  json_extract(my_sql_settings, '$.SecretsManagerAccessRoleArn') as secrets_manager_access_role_arn,
  json_extract(my_sql_settings, '$.SecretsManagerSecretId') as secrets_manager_secret_id,
  json_extract(my_sql_settings, '$.ServerName') as server_name,
  json_extract(my_sql_settings, '$.ServerTimezone') as server_timezone,
  json_extract(my_sql_settings, '$.TargetDbType') as target_db_type,
  json_extract(my_sql_settings, '$.Username') as username
from
  aws_dms_endpoint
where
  engine_name = 'mysql';
title description
Steampipe Table: aws_dms_replication_instance - Query AWS Database Migration Service Replication Instances using SQL
Allows users to query AWS Database Migration Service Replication Instances and provides information about each replication instance in an AWS DMS (Database Migration Service).

Table: aws_dms_replication_instance - Query AWS Database Migration Service Replication Instances using SQL

The AWS Database Migration Service Replication Instances are fully managed, serverless instances that enable the migration of data from one type of database to another. They facilitate homogeneous or heterogeneous migrations and can handle continuous data replication with high availability and consolidated auditing. This service significantly simplifies the process of migrating existing data to AWS in a secure and efficient manner.

Table Usage Guide

The aws_dms_replication_instance table in Steampipe provides you with information about each replication instance in an AWS Database Migration Service. This table allows you, as a database administrator, to query replication-specific details, including engine version, instance class, allocated storage, and associated metadata. You can utilize this table to gather insights on replication instances, such as their current state, multi-AZ mode, publicly accessible status, and more. The schema outlines the various attributes of the replication instance, including the replication instance ARN, replication instance identifier, availability zone, and associated tags for you.

Examples

Basic info

Explore which replication instances in your AWS Database Migration Service have public accessibility. This can help identify potential security risks and ensure that your data is properly protected.

select
  replication_instance_identifier,
  arn,
  engine_version,
  instance_create_time,
  kms_key_id,
  publicly_accessible,
  region
from
  aws_dms_replication_instance;
select
  replication_instance_identifier,
  arn,
  engine_version,
  instance_create_time,
  kms_key_id,
  publicly_accessible,
  region
from
  aws_dms_replication_instance;

List replication instances with auto minor version upgrades disabled

Determine the areas in which replication instances have automatic minor version upgrades turned off. This is useful for identifying potential security risks or outdated systems that may require manual updates.

select
  replication_instance_identifier,
  arn,
  engine_version,
  instance_create_time,
  auto_minor_version_upgrade,
  region
from
  aws_dms_replication_instance
where
  not auto_minor_version_upgrade;
select
  replication_instance_identifier,
  arn,
  engine_version,
  instance_create_time,
  auto_minor_version_upgrade,
  region
from
  aws_dms_replication_instance
where
  auto_minor_version_upgrade = 0;

List replication instances provisioned with undesired instance classes (for example, where only dms.r5.16xlarge and dms.r5.24xlarge are desired)

Identify replication instances whose instance class falls outside your approved list; in this example, dms.r5.16xlarge and dms.r5.24xlarge are the only desired classes. This enables you to find and remediate instances that do not meet your specific requirements or standards.

select
  replication_instance_identifier,
  arn,
  engine_version,
  instance_create_time,
  replication_instance_class,
  region
from
  aws_dms_replication_instance
where
  replication_instance_class not in ('dms.r5.16xlarge', 'dms.r5.24xlarge');
select
  replication_instance_identifier,
  arn,
  engine_version,
  instance_create_time,
  replication_instance_class,
  region
from
  aws_dms_replication_instance
where
  replication_instance_class not in ('dms.r5.16xlarge', 'dms.r5.24xlarge');

List publicly accessible replication instances

Determine the areas in which replication instances are publicly accessible. This can help enhance security by identifying potential vulnerabilities in your system.

select
  replication_instance_identifier,
  arn,
  publicly_accessible,
  region
from
  aws_dms_replication_instance
where
  publicly_accessible;
select
  replication_instance_identifier,
  arn,
  publicly_accessible,
  region
from
  aws_dms_replication_instance
where
  publicly_accessible = 1;

List replication instances not using multi-AZ deployment configurations

Identify instances where the replication process is not utilizing multi-AZ deployment configurations. This query is beneficial for pinpointing potential areas of vulnerability in your system, as it highlights where redundancies may not be in place to prevent data loss in the event of an AZ outage.

select
  replication_instance_identifier,
  arn,
  publicly_accessible,
  multi_az,
  region
from
  aws_dms_replication_instance
where
  not multi_az;
select
  replication_instance_identifier,
  arn,
  publicly_accessible,
  multi_az,
  region
from
  aws_dms_replication_instance
where
  multi_az = 0;
title description
Steampipe Table: aws_dms_replication_task - Query AWS DMS Replication Tasks using SQL
Enables users to query AWS DMS Replication Tasks to retrieve detailed information on data migration activities between source and target databases.

Table: aws_dms_replication_task - Query AWS DMS Replication Tasks using SQL

AWS Database Migration Service (DMS) Replication Tasks play a critical role in managing data migrations between source and target databases. These tasks facilitate the entire migration process, supporting various migration types, including full load migrations, ongoing replication to keep source and target databases synchronized, and change data capture (CDC) for applying data modifications.

The aws_dms_replication_task table in Steampipe allows for in-depth analysis of replication tasks, providing details such as task identifiers, status, migration types, settings, and endpoint ARNs. This table proves essential for database administrators and DevOps engineers overseeing database migrations, offering comprehensive insights into each task's configuration, progress, and performance.

Examples

Basic Info

Query to fetch basic details about DMS replication tasks.

select
  replication_task_identifier,
  arn,
  migration_type,
  status,
  replication_task_creation_date
from
  aws_dms_replication_task;
select
  replication_task_identifier,
  arn,
  migration_type,
  status,
  replication_task_creation_date
from
  aws_dms_replication_task;

Tasks with specific migration types

List replication tasks by a specific migration type, such as 'full-load'.

select
  replication_task_identifier,
  migration_type,
  status
from
  aws_dms_replication_task
where
  migration_type = 'full-load';
select
  replication_task_identifier,
  migration_type,
  status
from
  aws_dms_replication_task
where
  migration_type = 'full-load';

Replication tasks with failures

Identify replication tasks that have failed, focusing on the last failure message.

select
  replication_task_identifier,
  status,
  last_failure_message
from
  aws_dms_replication_task
where
  status = 'failed';
select
  replication_task_identifier,
  status,
  last_failure_message
from
  aws_dms_replication_task
where
  status = 'failed';

Task performance statistics

Examine detailed performance statistics of replication tasks.

select
  replication_task_identifier,
  status,
  replication_task_stats -> 'ElapsedTimeMillis' as elapsed_time_millis,
  replication_task_stats -> 'FreshStartDate' as fresh_start_date,
  replication_task_stats -> 'FullLoadFinishDate' as full_load_finish_date,
  replication_task_stats -> 'FullLoadProgressPercent' as full_load_progress_percent,
  replication_task_stats -> 'FullLoadStartDate' as full_load_start_date,
  replication_task_stats -> 'StartDate' as start_date,
  replication_task_stats -> 'StopDate' as stop_date,
  replication_task_stats -> 'TablesErrored' as tables_errored,
  replication_task_stats -> 'TablesLoaded' as tables_loaded,
  replication_task_stats -> 'TablesLoading' as tables_loading,
  replication_task_stats -> 'TablesQueued' as tables_queued
from
  aws_dms_replication_task;
select
  replication_task_identifier,
  status,
  json_extract(replication_task_stats, '$.ElapsedTimeMillis') as elapsed_time_millis,
  json_extract(replication_task_stats, '$.FreshStartDate') as fresh_start_date,
  json_extract(replication_task_stats, '$.FullLoadFinishDate') as full_load_finish_date,
  json_extract(replication_task_stats, '$.FullLoadProgressPercent') as full_load_progress_percent,
  json_extract(replication_task_stats, '$.FullLoadStartDate') as full_load_start_date,
  json_extract(replication_task_stats, '$.StartDate') as start_date,
  json_extract(replication_task_stats, '$.StopDate') as stop_date,
  json_extract(replication_task_stats, '$.TablesErrored') as tables_errored,
  json_extract(replication_task_stats, '$.TablesLoaded') as tables_loaded,
  json_extract(replication_task_stats, '$.TablesLoading') as tables_loading,
  json_extract(replication_task_stats, '$.TablesQueued') as tables_queued
from
  aws_dms_replication_task;

Get replication instance details

Retrieve replication instance details for the tasks.

select
  t.replication_task_identifier,
  t.arn as task_arn,
  i.replication_instance_class,
  i.engine_version,
  i.publicly_accessible,
  i.dns_name_servers
from
  aws_dms_replication_task t
join aws_dms_replication_instance i on t.replication_instance_arn = i.arn;
select
  t.replication_task_identifier,
  t.arn as task_arn,
  i.replication_instance_class,
  i.engine_version,
  i.publicly_accessible,
  i.dns_name_servers
from
  aws_dms_replication_task as t
join
  aws_dms_replication_instance as i on t.replication_instance_arn = i.arn;

List source endpoint tasks

Query to list tasks associated with source endpoints.

select
  replication_task_identifier,
  source_endpoint_arn,
  status
from
  aws_dms_replication_task
where
  endpoint_type = 'source';
select
  replication_task_identifier,
  source_endpoint_arn,
  status
from
  aws_dms_replication_task
where
  endpoint_type = 'source';

Endpoint type count

Count tasks by endpoint type (source or target).

select
  endpoint_type,
  count(*) as task_count
from
  aws_dms_replication_task
group by
  endpoint_type;
select
  endpoint_type,
  count(*) as task_count
from
  aws_dms_replication_task
group by
  endpoint_type;
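The same grouping pattern can summarize tasks by status for a quick health overview. A sketch against the same table; the exact status values (for example 'running', 'stopped', 'failed') depend on what DMS reports:

```sql
select
  status,
  count(*) as task_count
from
  aws_dms_replication_task
group by
  status
order by
  task_count desc;
```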
title description
Steampipe Table: aws_docdb_cluster - Query Amazon DocumentDB Cluster using SQL
Allows users to query Amazon DocumentDB Clusters for detailed information about their configuration, status, and associated metadata.

Table: aws_docdb_cluster - Query Amazon DocumentDB Cluster using SQL

The Amazon DocumentDB Cluster is a fully managed, MongoDB-compatible database service designed for workloads that need high availability, reliability, and scalability. It allows you to store, query, and index JSON data. DocumentDB makes it easy to operate mission-critical MongoDB workloads at scale.

Table Usage Guide

The aws_docdb_cluster table in Steampipe provides you with information about Amazon DocumentDB clusters within AWS. This table allows you as a DevOps engineer, database administrator, or other technical professional to query cluster-specific details, including configurations, status, and associated metadata. You can utilize this table to gather insights on clusters, such as their availability, backup and restore settings, encryption status, and more. The schema outlines the various attributes of the DocumentDB cluster for you, including the cluster ARN, creation time, DB subnet group, associated VPC, and backup retention period.

Examples

Basic Info

select
  arn,
  db_cluster_identifier,
  deletion_protection,
  engine,
  status,
  region
from
  aws_docdb_cluster;
select
  arn,
  db_cluster_identifier,
  deletion_protection,
  engine,
  status,
  region
from
  aws_docdb_cluster;

List clusters which are not encrypted

Identify database clusters whose storage is not encrypted at rest, so you can address these potential vulnerabilities and enhance your security posture.

select
  db_cluster_identifier,
  status,
  cluster_create_time,
  kms_key_id,
  storage_encrypted
from
  aws_docdb_cluster
where
  not storage_encrypted;
select
  db_cluster_identifier,
  status,
  cluster_create_time,
  kms_key_id,
  storage_encrypted
from
  aws_docdb_cluster
where
  storage_encrypted = 0;

List clusters where backup retention period is greater than 7 days

Identify instances where the backup retention period for database clusters exceeds a week. This could be useful in managing data storage and ensuring compliance with data retention policies.

select
  db_cluster_identifier,
  backup_retention_period
from
  aws_docdb_cluster
where
  backup_retention_period > 7;
select
  db_cluster_identifier,
  backup_retention_period
from
  aws_docdb_cluster
where
  backup_retention_period > 7;

Get availability zone count for each cluster

Determine the number of availability zones for each database cluster in your AWS DocumentDB service to better manage and distribute your databases across different zones for high availability and fault tolerance.

select
  db_cluster_identifier,
  jsonb_array_length(availability_zones) as availability_zones_count
from
  aws_docdb_cluster;
select
  db_cluster_identifier,
  json_array_length(availability_zones) as availability_zones_count
from
  aws_docdb_cluster;

List clusters where deletion protection is disabled

Identify clusters that have deletion protection disabled, so you can spot potential vulnerabilities and enhance security measures. This is particularly useful in maintaining data integrity by preventing accidental deletions.

select
  db_cluster_identifier,
  status,
  cluster_create_time,
  deletion_protection
from
  aws_docdb_cluster
where
  not deletion_protection;
select
  db_cluster_identifier,
  status,
  cluster_create_time,
  deletion_protection
from
  aws_docdb_cluster
where
  deletion_protection = 0;

List cluster members details

Identify instances where you can assess the status and roles of members within your AWS DocumentDB clusters. This enables you to understand the configuration of each cluster member, including their promotion tier and whether they have write access.

select
  db_cluster_identifier,
  member ->> 'DBClusterParameterGroupStatus' as db_cluster_parameter_group_status,
  member ->> 'DBInstanceIdentifier' as db_instance_identifier,
  member ->> 'IsClusterWriter' as is_cluster_writer,
  member ->> 'PromotionTier' as promotion_tier
from
  aws_docdb_cluster
  cross join jsonb_array_elements(members) as member;
select
  db_cluster_identifier,
  json_extract(member.value, '$.DBClusterParameterGroupStatus') as db_cluster_parameter_group_status,
  json_extract(member.value, '$.DBInstanceIdentifier') as db_instance_identifier,
  json_extract(member.value, '$.IsClusterWriter') as is_cluster_writer,
  json_extract(member.value, '$.PromotionTier') as promotion_tier
from
  aws_docdb_cluster,
  json_each(members) as member;

title description
Steampipe Table: aws_docdb_cluster_instance - Query Amazon DocumentDB Cluster Instances using SQL
Allows users to query Amazon DocumentDB Cluster Instances to gather detailed information such as instance identifier, cluster identifier, instance class, availability zone, engine version, and more.

Table: aws_docdb_cluster_instance - Query Amazon DocumentDB Cluster Instances using SQL

The Amazon DocumentDB Cluster Instance is a part of Amazon DocumentDB, a fast, scalable, highly available, and fully managed document database service that supports MongoDB workloads. It provides the performance, scalability, and availability you need when operating mission-critical MongoDB workloads at scale. With DocumentDB, you can store, query, and index JSON data.

Table Usage Guide

The aws_docdb_cluster_instance table in Steampipe provides you with information about Amazon DocumentDB Cluster Instances. This table allows you as a DevOps engineer, database administrator, or other technical professional to query detailed information about each cluster instance, such as its identifier, associated cluster identifier, instance class, availability zone, engine version, and other relevant metadata. You can utilize this table to gather insights on the configuration, performance, and status of your DocumentDB cluster instances. The schema outlines the various attributes of the DocumentDB cluster instance, including instance ARN, creation time, instance status, and associated tags for you.

Examples

Basic info

Gain insights into the specifics of your AWS DocumentDB Cluster instances, such as the engine type, version, and instance class. This can be useful for assessing your current configuration and identifying potential areas for optimization or upgrade.

select
  db_instance_identifier,
  db_cluster_identifier,
  engine,
  engine_version,
  db_instance_class,
  availability_zone
from
  aws_docdb_cluster_instance;
select
  db_instance_identifier,
  db_cluster_identifier,
  engine,
  engine_version,
  db_instance_class,
  availability_zone
from
  aws_docdb_cluster_instance;

List instances which are publicly accessible

Identify instances that are accessible to the public, allowing you to review and manage your data's exposure and security. This query is useful for maintaining control over your data privacy and ensuring that only authorized users have access.

select
  db_instance_identifier,
  db_cluster_identifier,
  engine,
  engine_version,
  db_instance_class,
  availability_zone
from
  aws_docdb_cluster_instance
where
  publicly_accessible;
select
  db_instance_identifier,
  db_cluster_identifier,
  engine,
  engine_version,
  db_instance_class,
  availability_zone
from
  aws_docdb_cluster_instance
where
  publicly_accessible = 1;

Get DB subnet group information of each instance

Explore the status and details of your database subnet groups across instances to understand their configuration and ensure optimal database management. This is beneficial for maintaining network efficiency and security in your AWS DocumentDB clusters.

select
  db_subnet_group_arn,
  db_subnet_group_name,
  db_subnet_group_description,
  db_subnet_group_status
from
  aws_docdb_cluster_instance;
select
  db_subnet_group_arn,
  db_subnet_group_name,
  db_subnet_group_description,
  db_subnet_group_status
from
  aws_docdb_cluster_instance;

Get VPC and subnet information of each instance

Review the VPC security groups and subnets each database instance is attached to. This is useful for understanding your database's network configuration and ensuring it aligns with your security and performance requirements.

select
  db_instance_identifier as attached_vpc,
  vsg ->> 'VpcSecurityGroupId' as vpc_security_group_id,
  vsg ->> 'Status' as status,
  sub -> 'SubnetAvailabilityZone' ->> 'Name' as subnet_availability_zone,
  sub ->> 'SubnetIdentifier' as subnet_identifier,
  sub -> 'SubnetOutpost' ->> 'Arn' as subnet_outpost,
  sub ->> 'SubnetStatus' as subnet_status
from
  aws_docdb_cluster_instance
  cross join jsonb_array_elements(vpc_security_groups) as vsg
  cross join jsonb_array_elements(subnets) as sub;
select
  db_instance_identifier as attached_vpc,
  json_extract(vsg.value, '$.VpcSecurityGroupId') as vpc_security_group_id,
  json_extract(vsg.value, '$.Status') as status,
  json_extract(json_extract(sub.value, '$.SubnetAvailabilityZone'), '$.Name') as subnet_availability_zone,
  json_extract(sub.value, '$.SubnetIdentifier') as subnet_identifier,
  json_extract(json_extract(sub.value, '$.SubnetOutpost'), '$.Arn') as subnet_outpost,
  json_extract(sub.value, '$.SubnetStatus') as subnet_status
from
  aws_docdb_cluster_instance,
  json_each(vpc_security_groups) as vsg,
  json_each(subnets) as sub;

List instances with unencrypted storage

Identify instances where storage is not encrypted to understand potential vulnerabilities in your database security. This is crucial for ensuring data protection and compliance with security regulations.

select
  db_instance_identifier,
  db_cluster_identifier,
  db_instance_class
from
  aws_docdb_cluster_instance
where
  not storage_encrypted;
select
  db_instance_identifier,
  db_cluster_identifier,
  db_instance_class
from
  aws_docdb_cluster_instance
where
  storage_encrypted = 0;

List instances with cloudwatch logs disabled

Identify instances where DocumentDB clusters in AWS might be vulnerable due to disabled CloudWatch logs. This query is beneficial for improving security and compliance by ensuring that all instances have logging enabled.

select
  db_instance_identifier,
  db_cluster_identifier,
  db_instance_class
from
  aws_docdb_cluster_instance
where
  enabled_cloudwatch_logs_exports is null;
select
  db_instance_identifier,
  db_cluster_identifier,
  db_instance_class
from
  aws_docdb_cluster_instance
where
  enabled_cloudwatch_logs_exports is null;

Get network endpoint information of each instance

Gain insights into the network connectivity of each instance by identifying the network endpoint details. This can be beneficial in diagnosing connectivity issues or planning network configurations.

select
  db_instance_identifier,
  endpoint_address,
  endpoint_hosted_zone_id,
  endpoint_port
from
  aws_docdb_cluster_instance;
select
  db_instance_identifier,
  endpoint_address,
  endpoint_hosted_zone_id,
  endpoint_port
from
  aws_docdb_cluster_instance;
title description
Steampipe Table: aws_docdb_cluster_snapshot - Query Amazon DocumentDB Cluster Snapshot using SQL
Allows users to query Amazon DocumentDB Cluster Snapshots for detailed information about their configuration, status, and associated metadata.

Table: aws_docdb_cluster_snapshot - Query Amazon DocumentDB Cluster Snapshots using SQL

The aws_docdb_cluster_snapshot table provides detailed information about snapshots of Amazon DocumentDB clusters. These snapshots are storage volume snapshots that back up the entire cluster, enabling data recovery and historical analysis.

Table Usage Guide

This table allows DevOps engineers, database administrators, and other technical professionals to query detailed information about Amazon DocumentDB cluster snapshots. Utilize this table to analyze snapshot configurations, encryption statuses, and other metadata. The schema includes attributes of the DocumentDB cluster snapshots, such as identifiers, creation times, and the associated cluster details.

Examples

List of cluster snapshots that are not encrypted

Identify unencrypted cluster snapshots to assess and improve your security posture.

select
  db_cluster_snapshot_identifier,
  snapshot_type,
  not storage_encrypted as storage_not_encrypted,
  split_part(kms_key_id, '/', 1) as kms_key_id
from
  aws_docdb_cluster_snapshot
where
  not storage_encrypted;
select
  db_cluster_snapshot_identifier,
  snapshot_type,
  not storage_encrypted as storage_not_encrypted,
  substr(kms_key_id, 1, instr(kms_key_id, '/') - 1) as kms_key_id
from
  aws_docdb_cluster_snapshot
where
  not storage_encrypted;

Cluster information of each snapshot

Retrieve basic information about each cluster snapshot, including its creation time and the engine details.

select
  db_cluster_snapshot_identifier,
  cluster_create_time,
  engine,
  engine_version
from
  aws_docdb_cluster_snapshot;
select
  db_cluster_snapshot_identifier,
  cluster_create_time,
  engine,
  engine_version
from
  aws_docdb_cluster_snapshot;

Cluster snapshot count per cluster

Determine the number of snapshots taken for each cluster to help manage snapshot policies and storage.

select
  db_cluster_identifier,
  count(db_cluster_snapshot_identifier) as snapshot_count
from
  aws_docdb_cluster_snapshot
group by
  db_cluster_identifier;
select
  db_cluster_identifier,
  count(db_cluster_snapshot_identifier) as snapshot_count
from
  aws_docdb_cluster_snapshot
group by
  db_cluster_identifier;
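Alongside the count, it can be useful to know when each cluster was last backed up. A sketch that takes the latest snapshot time per cluster; it assumes the table exposes a `snapshot_create_time` column, as the RDS-style snapshot tables do:

```sql
select
  db_cluster_identifier,
  max(snapshot_create_time) as latest_snapshot_time
from
  aws_docdb_cluster_snapshot
group by
  db_cluster_identifier;
```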

List of manual cluster snapshots

Filter for manually created cluster snapshots to distinguish them from automatic backups.

select
  db_cluster_snapshot_identifier,
  engine,
  snapshot_type
from
  aws_docdb_cluster_snapshot
where
  snapshot_type = 'manual';
select
  db_cluster_snapshot_identifier,
  engine,
  snapshot_type
from
  aws_docdb_cluster_snapshot
where
  snapshot_type = 'manual';
title description
Steampipe Table: aws_drs_job - Query AWS Elastic Disaster Recovery Jobs using SQL
Allows users to query AWS Elastic Disaster Recovery (DRS) Jobs and retrieve key job details such as job ID, job status, creation time, and more.

Table: aws_drs_job - Query AWS Elastic Disaster Recovery Jobs using SQL

AWS Elastic Disaster Recovery (DRS) minimizes downtime and data loss by enabling fast, reliable recovery of on-premises and cloud-based applications into AWS. A DRS job represents a recovery-related operation, such as launching drill or recovery instances, performing failback, or terminating recovery instances, and tracks the progress of that operation and the source servers it affects.

Table Usage Guide

The aws_drs_job table in Steampipe provides you with information about jobs within AWS Data Replication Service (DRS). This table allows you, as a DevOps engineer, to query job-specific details, including job status, creation time, end time, and associated metadata. You can utilize this table to gather insights on jobs, such as job progress, replication status, verification of job parameters, and more. The schema outlines the various attributes of the DRS job for you, including the job ID, job type, creation time, end time, and associated tags.

Examples

Basic Info

Determine the status and origin of specific jobs within AWS Elastic Disaster Recovery (DRS). This can help in monitoring ongoing jobs and identifying who initiated them, providing crucial insights for task management and accountability.

select
  title,
  arn,
  status,
  initiated_by
from
  aws_drs_job;
select
  title,
  arn,
  status,
  initiated_by
from
  aws_drs_job;

List jobs that are in pending state

Identify jobs that are still pending, to better manage workload distribution and resource allocation.

select
  title,
  arn,
  status,
  initiated_by,
  creation_date_time
from
  aws_drs_job
where
  status = 'PENDING';
select
  title,
  arn,
  status,
  initiated_by,
  creation_date_time
from
  aws_drs_job
where
  status = 'PENDING';

List jobs that were started in past 30 days

Identify instances where jobs have been initiated in the past 30 days. This is useful for tracking recent activities and understanding the current workload.

select
  title,
  arn,
  status,
  initiated_by,
  type,
  creation_date_time,
  end_date_time
from
  aws_drs_job
where
  creation_date_time >= now() - interval '30' day;
select
  title,
  arn,
  status,
  initiated_by,
  type,
  creation_date_time,
  end_date_time
from
  aws_drs_job
where
  creation_date_time >= datetime('now', '-30 day');
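The same table can also give a quick breakdown of job outcomes. A sketch that counts jobs by status and type:

```sql
select
  status,
  type,
  count(*) as job_count
from
  aws_drs_job
group by
  status,
  type;
```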
title description
Steampipe Table: aws_drs_recovery_instance - Query AWS Disaster Recovery Service Recovery Instances using SQL
Allows users to query AWS Disaster Recovery Service Recovery Instances to retrieve information about recovery instances, including instance type, recovery instance ARN, and associated tags.

Table: aws_drs_recovery_instance - Query AWS Disaster Recovery Service Recovery Instances using SQL

The AWS Disaster Recovery Service Recovery Instance is a component of the AWS Disaster Recovery Service, which aids in the recovery of applications and data in the event of a disaster. It allows for the rapid recovery of your IT infrastructure and data by utilizing AWS's robust, scalable, and secure global infrastructure. This service supports recovery scenarios ranging from small customer workload data loss to a complete site outage.

Table Usage Guide

The aws_drs_recovery_instance table in Steampipe provides you with information about recovery instances within AWS Disaster Recovery Service (DRS). This table allows you, as a DevOps engineer, to query recovery instance-specific details, including the instance type, recovery instance ARN, associated EC2 instance, failback status, and data replication state. The schema outlines the various attributes of the recovery instance for you, including the source server ID, job ID, and associated tags.

Examples

Basic Info

Uncover the details of AWS Disaster Recovery Service's recovery instances, such as their current state and associated EC2 instances. This can be useful for maintaining an overview of your disaster recovery setup and ensuring everything is functioning as expected.

select
  recovery_instance_id,
  arn,
  source_server_id,
  ec2_instance_id,
  ec2_instance_state
from
  aws_drs_recovery_instance;
select
  recovery_instance_id,
  arn,
  source_server_id,
  ec2_instance_id,
  ec2_instance_state
from
  aws_drs_recovery_instance;

Get recovery instance properties of each recovery instance

Explore the characteristics of each recovery instance, such as CPU usage, disk activity, identification hints, update time, network interfaces, operating system, and RAM usage. This can help in assessing the performance and resource usage of each instance, aiding in efficient resource management and troubleshooting.

select
  recovery_instance_id,
  arn,
  recovery_instance_properties ->> 'Cpus' as recovery_instance_cpus,
  recovery_instance_properties ->> 'Disks' as recovery_instance_disks,
  recovery_instance_properties ->> 'IdentificationHints' as recovery_instance_identification_hints,
  recovery_instance_properties ->> 'LastUpdatedDateTime' as recovery_instance_last_updated_date_time,
  recovery_instance_properties ->> 'NetworkInterfaces' as recovery_instance_network_interfaces,
  recovery_instance_properties ->> 'Os' as recovery_instance_os,
  recovery_instance_properties ->> 'RamBytes' as recovery_instance_ram_bytes
from
  aws_drs_recovery_instance;
select
  recovery_instance_id,
  arn,
  json_extract(recovery_instance_properties, '$.Cpus') as recovery_instance_cpus,
  json_extract(recovery_instance_properties, '$.Disks') as recovery_instance_disks,
  json_extract(recovery_instance_properties, '$.IdentificationHints') as recovery_instance_identification_hints,
  json_extract(recovery_instance_properties, '$.LastUpdatedDateTime') as recovery_instance_last_updated_date_time,
  json_extract(recovery_instance_properties, '$.NetworkInterfaces') as recovery_instance_network_interfaces,
  json_extract(recovery_instance_properties, '$.Os') as recovery_instance_os,
  json_extract(recovery_instance_properties, '$.RamBytes') as recovery_instance_ram_bytes
from
  aws_drs_recovery_instance;

Get failback details of each recovery instance

Determine the status and details of each recovery instance's failback process in your AWS Disaster Recovery Service. This allows you to understand the progress and potential issues in your data recovery efforts.

select
  recovery_instance_id,
  arn,
  source_server_id,
  ec2_instance_id,
  failback ->> 'AgentLastSeenByServiceDateTime' as agent_last_seen_by_service_date_time,
  failback ->> 'ElapsedReplicationDuration' as elapsed_replication_duration,
  failback ->> 'FailbackClientID' as failback_client_id,
  failback ->> 'FailbackClientLastSeenByServiceDateTime' as failback_client_last_seen_by_service_date_time,
  failback ->> 'FailbackInitiationTime' as failback_initiation_time,
  failback -> 'FailbackJobID' as failback_job_id,
  failback -> 'FailbackLaunchType' as failback_launch_type,
  failback -> 'FailbackToOriginalServer' as failback_to_original_server,
  failback -> 'FirstByteDateTime' as failback_first_byte_date_time,
  failback -> 'State' as failback_state
from
  aws_drs_recovery_instance;
select
  recovery_instance_id,
  arn,
  source_server_id,
  ec2_instance_id,
  json_extract(failback, '$.AgentLastSeenByServiceDateTime') as agent_last_seen_by_service_date_time,
  json_extract(failback, '$.ElapsedReplicationDuration') as elapsed_replication_duration,
  json_extract(failback, '$.FailbackClientID') as failback_client_id,
  json_extract(failback, '$.FailbackClientLastSeenByServiceDateTime') as failback_client_last_seen_by_service_date_time,
  json_extract(failback, '$.FailbackInitiationTime') as failback_initiation_time,
  json_extract(failback, '$.FailbackJobID') as failback_job_id,
  json_extract(failback, '$.FailbackLaunchType') as failback_launch_type,
  json_extract(failback, '$.FailbackToOriginalServer') as failback_to_original_server,
  json_extract(failback, '$.FirstByteDateTime') as failback_first_byte_date_time,
  json_extract(failback, '$.State') as failback_state
from
  aws_drs_recovery_instance;

Get data replication info of each recovery instance

Review the data replication state of each recovery instance. This can help assess the status and health of your data recovery operations, and identify any potential issues or delays in the replication process.

select
  recovery_instance_id,
  arn,
  data_replication_info -> 'DataReplicationInitiation' ->> 'StartDateTime' as data_replication_start_date_time,
  data_replication_info -> 'DataReplicationInitiation' ->> 'NextAttemptDateTime' as data_replication_next_attempt_date_time,
  data_replication_info ->> 'DataReplicationError' as data_replication_error,
  data_replication_info ->> 'DataReplicationState' as data_replication_state,
  data_replication_info ->> 'ReplicatedDisks' as data_replication_replicated_disks
from
  aws_drs_recovery_instance;
select
  recovery_instance_id,
  arn,
  json_extract(data_replication_info, '$.DataReplicationInitiation.StartDateTime') as data_replication_start_date_time,
  json_extract(data_replication_info, '$.DataReplicationInitiation.NextAttemptDateTime') as data_replication_next_attempt_date_time,
  json_extract(data_replication_info, '$.DataReplicationError') as data_replication_error,
  json_extract(data_replication_info, '$.DataReplicationState') as data_replication_state,
  json_extract(data_replication_info, '$.ReplicatedDisks') as data_replication_replicated_disks
from
  aws_drs_recovery_instance;

List recovery instances that are created for an actual recovery event

Determine the instances created for actual recovery events, allowing you to focus on real-time disaster recovery efforts rather than drills or tests. This is beneficial in tracking and managing genuine recovery instances for efficient resource allocation and response.

select
  recovery_instance_id,
  arn,
  source_server_id,
  ec2_instance_id,
  ec2_instance_state,
  is_drill,
  job_id
from
  aws_drs_recovery_instance
where
  not is_drill;
select
  recovery_instance_id,
  arn,
  source_server_id,
  ec2_instance_id,
  ec2_instance_state,
  is_drill,
  job_id
from
  aws_drs_recovery_instance
where
  is_drill = 0;
title description
Steampipe Table: aws_drs_recovery_snapshot - Query AWS DRS Recovery Snapshot using SQL
Allows users to query DRS Recovery Snapshot data in AWS. It provides information about recovery snapshots within AWS Disaster Recovery Service (DRS). This table can be used to gather insights on recovery snapshots, including their details, associated metadata, and more.

Table: aws_drs_recovery_snapshot - Query AWS DRS Recovery Snapshot using SQL

The AWS Disaster Recovery Service (DRS) Recovery Snapshot is a feature of AWS DRS that allows you to capture the state of your resources at a specific point in time. This is crucial for disaster recovery purposes, enabling you to restore your system to a previous state in case of a disaster. The snapshot includes all of your data, applications, and configurations, providing a comprehensive backup of your resources.

Table Usage Guide

The aws_drs_recovery_snapshot table in Steampipe provides you with information about recovery snapshots within AWS Disaster Recovery Service (DRS). This table enables you, as a DevOps engineer, to query snapshot-specific details, including snapshot ID, associated volume ID, start and end times, and associated metadata. You can utilize this table to gather insights on recovery snapshots, such as snapshot status, volume size, and more. The schema outlines the various attributes of the recovery snapshot for you, including the snapshot ID, volume ID, start and end times, snapshot status, and volume size.

Examples

Basic Info

List the recovery snapshots in your AWS Disaster Recovery Service, along with their source servers and timestamps. This can help you anticipate and manage potential system recovery needs effectively.

select
  snapshot_id,
  source_server_id,
  expected_timestamp,
  timestamp,
  title
from
  aws_drs_recovery_snapshot;
select
  snapshot_id,
  source_server_id,
  expected_timestamp,
  timestamp,
  title
from
  aws_drs_recovery_snapshot;

Get source server details of each recovery snapshot

This query is useful for gaining insights into the origin of each recovery snapshot in a disaster recovery system. It allows users to identify the specific source server of each snapshot, which can be beneficial for system audits, recovery planning, and troubleshooting.

select
  r.snapshot_id,
  r.source_server_id,
  s.arn as source_server_arn,
  s.recovery_instance_id,
  s.replication_direction
from
  aws_drs_recovery_snapshot r,
  aws_drs_source_server as s
where
  r.source_server_id = s.source_server_id;
select
  r.snapshot_id,
  r.source_server_id,
  s.arn as source_server_arn,
  s.recovery_instance_id,
  s.replication_direction
from
  aws_drs_recovery_snapshot r,
  aws_drs_source_server as s
where
  r.source_server_id = s.source_server_id;

Count recovery snapshots by server

Determine the quantity of recovery snapshots for each server to understand the frequency of data recovery measures taken. This is useful for assessing the robustness of your data backup strategy.

select
  source_server_id,
  count(snapshot_id) as recovery_snapshot_count
from
  aws_drs_recovery_snapshot
group by
  source_server_id;

List recovery snapshots taken in past 30 days

Identify instances where recovery snapshots have been taken in the past 30 days. This is useful for maintaining an up-to-date backup and recovery strategy in your AWS environment.

select
  snapshot_id,
  source_server_id,
  expected_timestamp,
  timestamp
from
  aws_drs_recovery_snapshot
where
  timestamp >= now() - interval '30' day;
select
  snapshot_id,
  source_server_id,
  expected_timestamp,
  timestamp
from
  aws_drs_recovery_snapshot
where
  timestamp >= datetime('now', '-30 day');

Get EBS snapshot details of a recovery snapshot

Determine the specifics of a particular recovery snapshot within your AWS Disaster Recovery service, such as its state, volume size, and encryption details. This can be useful for understanding the properties of your recovery snapshots and ensuring they meet your data security and storage requirements.

select
  r.snapshot_id,
  r.source_server_id,
  s as ebs_snapshot_id,
  e.state as snapshot_state,
  e.volume_size,
  e.volume_id,
  e.encrypted,
  e.kms_key_id,
  e.data_encryption_key_id
from
  aws_drs_recovery_snapshot as r,
  jsonb_array_elements_text(ebs_snapshots) as s,
  aws_ebs_snapshot as e
where
  r.snapshot_id = 'pit-3367d3f930778a9c3'
and
  s = e.snapshot_id;
select
  r.snapshot_id,
  r.source_server_id,
  json_extract(s.value, '$') as ebs_snapshot_id,
  e.state as snapshot_state,
  e.volume_size,
  e.volume_id,
  e.encrypted,
  e.kms_key_id,
  e.data_encryption_key_id
from
  aws_drs_recovery_snapshot as r,
  json_each(r.ebs_snapshots) as s,
  aws_ebs_snapshot as e
where
  r.snapshot_id = 'pit-3367d3f930778a9c3'
and
  json_extract(s.value, '$') = e.snapshot_id;
title description
Steampipe Table: aws_drs_source_server - Query AWS Elastic Disaster Recovery Source Server using SQL
Allows users to query AWS Elastic Disaster Recovery (DRS) Source Servers for detailed information about the servers being replicated into AWS for disaster recovery.

Table: aws_drs_source_server - Query AWS Elastic Disaster Recovery Source Server using SQL

A Source Server in AWS Elastic Disaster Recovery (DRS) represents a physical, virtual, or cloud-hosted server that is being replicated into AWS for recovery purposes. AWS DRS continuously replicates the source server's disks to a staging area in your AWS account, allowing you to launch recovery instances quickly and reliably in the event of a disaster.

Table Usage Guide

The aws_drs_source_server table in Steampipe provides you with information about source servers within AWS Elastic Disaster Recovery (DRS). This table allows you, as a DevOps engineer, to query server-specific details, including data replication state, last launch results, source machine properties, and more. You can utilize this table to gather insights on source servers, such as replication status, launch configuration settings, and source environment details. The schema outlines the various attributes of the source server for you, including the server ARN, last launch result, data replication info, and associated tags.

Examples

Basic Info

Explore the status of your last server launch and identify the source server details to understand the origin of your data. This can help in tracing data lineage or diagnosing issues related to specific server launches.

select
  arn,
  last_launch_result,
  source_server_id,
  title
from
  aws_drs_source_server;

Get source cloud properties of all source servers

Explore the origin details of all source servers on your cloud platform. This information can be useful to understand the geographical distribution and account ownership of your servers, aiding in resource allocation and risk management strategies.

select
  arn,
  title,
  source_cloud_properties ->> 'OriginAccountID' as source_cloud_origin_account_id,
  source_cloud_properties ->> 'OriginAvailabilityZone' as source_cloud_origin_availability_zone,
  source_cloud_properties ->> 'OriginRegion' as source_cloud_origin_region
from
  aws_drs_source_server;
select
  arn,
  title,
  json_extract(source_cloud_properties, '$.OriginAccountID') as source_cloud_origin_account_id,
  json_extract(source_cloud_properties, '$.OriginAvailabilityZone') as source_cloud_origin_availability_zone,
  json_extract(source_cloud_properties, '$.OriginRegion') as source_cloud_origin_region
from
  aws_drs_source_server;

Get source properties of all source servers

This query helps you gain insights into the properties of all source servers, including CPU, disk details, network interfaces, RAM, and the recommended instance type. It's useful for understanding the capabilities of each server and for making informed decisions on server management and resource allocation.

select
  arn,
  title,
  source_properties ->> 'Cpus' as source_cpus,
  source_properties ->> 'Disks' as source_disks,
  source_properties -> 'IdentificationHints' ->> 'Hostname' as source_hostname,
  source_properties ->> 'NetworkInterfaces' as source_network_interfaces,
  source_properties -> 'Os' ->> 'FullString' as source_os,
  source_properties -> 'RamBytes' as source_ram_bytes,
  source_properties -> 'RecommendedInstanceType' as source_recommended_instance_type,
  source_properties -> 'LastUpdatedDateTime' as source_last_updated_date_time
from
  aws_drs_source_server;
select
  arn,
  title,
  json_extract(source_properties, '$.Cpus') as source_cpus,
  json_extract(source_properties, '$.Disks') as source_disks,
  json_extract(source_properties, '$.IdentificationHints.Hostname') as source_hostname,
  json_extract(source_properties, '$.NetworkInterfaces') as source_network_interfaces,
  json_extract(source_properties, '$.Os.FullString') as source_os,
  json_extract(source_properties, '$.RamBytes') as source_ram_bytes,
  json_extract(source_properties, '$.RecommendedInstanceType') as source_recommended_instance_type,
  json_extract(source_properties, '$.LastUpdatedDateTime') as source_last_updated_date_time
from
  aws_drs_source_server;

Get data replication info of all source servers

Explore the status of data replication across all source servers, identifying any errors and assessing when the next replication attempt will occur. This information can be crucial in ensuring data integrity and timely updates across your network.

select
  arn,
  title,
  data_replication_info -> 'DataReplicationInitiation' ->> 'StartDateTime' as data_replication_start_date_time,
  data_replication_info -> 'DataReplicationInitiation' ->> 'NextAttemptDateTime' as data_replication_next_attempt_date_time,
  data_replication_info ->> 'DataReplicationError' as data_replication_error,
  data_replication_info ->> 'DataReplicationState' as data_replication_state,
  data_replication_info ->> 'ReplicatedDisks' as data_replication_replicated_disks
from
  aws_drs_source_server;
select
  arn,
  title,
  json_extract(data_replication_info, '$.DataReplicationInitiation.StartDateTime') as data_replication_start_date_time,
  json_extract(data_replication_info, '$.DataReplicationInitiation.NextAttemptDateTime') as data_replication_next_attempt_date_time,
  json_extract(data_replication_info, '$.DataReplicationError') as data_replication_error,
  json_extract(data_replication_info, '$.DataReplicationState') as data_replication_state,
  json_extract(data_replication_info, '$.ReplicatedDisks') as data_replication_replicated_disks
from
  aws_drs_source_server;

Get launch configuration settings of all source servers

Explore the launch configuration settings of all source servers to understand their setup and configuration. This can help in assessing the current state of servers for auditing, troubleshooting, or planning purposes.

select
  arn,
  title,
  launch_configuration ->> 'Name' as launch_configuration_name,
  launch_configuration ->> 'CopyPrivateIp' as launch_configuration_copy_private_ip,
  launch_configuration ->> 'CopyTags' as launch_configuration_copy_tags,
  launch_configuration ->> 'Ec2LaunchTemplateID' as launch_configuration_ec2_launch_template_id,
  launch_configuration ->> 'LaunchDisposition' as launch_configuration_disposition,
  launch_configuration ->> 'TargetInstanceTypeRightSizingMethod' as launch_configuration_target_instance_type_right_sizing_method,
  launch_configuration -> 'Licensing' as launch_configuration_licensing,
  launch_configuration -> 'ResultMetadata' as launch_configuration_result_metadata
from
  aws_drs_source_server;
select
  arn,
  title,
  json_extract(launch_configuration, '$.Name') as launch_configuration_name,
  json_extract(launch_configuration, '$.CopyPrivateIp') as launch_configuration_copy_private_ip,
  json_extract(launch_configuration, '$.CopyTags') as launch_configuration_copy_tags,
  json_extract(launch_configuration, '$.Ec2LaunchTemplateID') as launch_configuration_ec2_launch_template_id,
  json_extract(launch_configuration, '$.LaunchDisposition') as launch_configuration_disposition,
  json_extract(launch_configuration, '$.TargetInstanceTypeRightSizingMethod') as launch_configuration_target_instance_type_right_sizing_method,
  json_extract(launch_configuration, '$.Licensing') as launch_configuration_licensing,
  json_extract(launch_configuration, '$.ResultMetadata') as launch_configuration_result_metadata
from
  aws_drs_source_server;

List source servers that failed last recovery launch

Identify instances where the last recovery launch of source servers was unsuccessful, which is crucial for troubleshooting and ensuring the robustness of the disaster recovery system.

select
  title,
  arn,
  last_launch_result,
  source_server_id
from
  aws_drs_source_server
where
  last_launch_result = 'FAILED';

List disconnected source servers

Identify instances where source servers have become disconnected. This is useful for troubleshooting and maintaining data integrity across your AWS infrastructure.

select
  title,
  arn,
  data_replication_info ->> 'DataReplicationState' as data_replication_state,
  data_replication_info ->> 'DataReplicationError' as data_replication_error,
  data_replication_info -> 'DataReplicationInitiation' ->> 'StartDateTime' as data_replication_start_date_time,
  data_replication_info -> 'DataReplicationInitiation' ->> 'NextAttemptDateTime' as data_replication_next_attempt_date_time
from
  aws_drs_source_server
where
  data_replication_info ->> 'DataReplicationState' = 'DISCONNECTED';
select
  title,
  arn,
  json_extract(data_replication_info, '$.DataReplicationState') as data_replication_state,
  json_extract(data_replication_info, '$.DataReplicationError') as data_replication_error,
  json_extract(data_replication_info, '$.DataReplicationInitiation.StartDateTime') as data_replication_start_date_time,
  json_extract(data_replication_info, '$.DataReplicationInitiation.NextAttemptDateTime') as data_replication_next_attempt_date_time
from
  aws_drs_source_server
where
  json_extract(data_replication_info, '$.DataReplicationState') = 'DISCONNECTED';
title description
Steampipe Table: aws_dynamodb_backup - Query AWS DynamoDB Backup using SQL
Allows users to query DynamoDB Backup details such as backup ARN, backup creation date, backup size, backup status, and more.

Table: aws_dynamodb_backup - Query AWS DynamoDB Backup using SQL

The AWS DynamoDB Backup service provides on-demand and continuous backups of your DynamoDB tables, safeguarding your data for archival and disaster recovery. It enables point-in-time recovery, allowing you to restore your table data from any second in the past 35 days. This service also supports backup and restore actions through AWS Management Console, AWS CLI, and AWS SDKs.

Table Usage Guide

The aws_dynamodb_backup table in Steampipe provides you with information about backups in AWS DynamoDB. This table allows you, as a DevOps engineer, to query backup-specific details, including backup ARN, backup creation date, backup size, backup status, and more. You can utilize this table to gather insights on backups, such as backup status, backup type, size in bytes, and more. The schema outlines the various attributes of the DynamoDB backup for you, including the backup ARN, backup creation date, backup size, backup status, and more.

Examples

List backups with their corresponding tables

Determine the areas in which backups are associated with their corresponding tables in the AWS DynamoDB service. This can be useful for understanding the relationship between backups and tables, aiding in efficient data management and disaster recovery planning.

select
  name,
  table_name,
  table_id
from
  aws_dynamodb_backup;

Basic backup info

Assess the elements within your AWS DynamoDB backup, such as status, type, expiry date, and size. This allows you to manage and optimize your backup strategy effectively.

select
  name,
  backup_status,
  backup_type,
  backup_expiry_datetime,
  backup_size_bytes
from
  aws_dynamodb_backup;
title description
Steampipe Table: aws_dynamodb_global_table - Query AWS DynamoDB Global Tables using SQL
Allows users to query AWS DynamoDB Global Tables to gather information about the global tables, including the table name, creation time, status, and other related details.

Table: aws_dynamodb_global_table - Query AWS DynamoDB Global Tables using SQL

The AWS DynamoDB Global Table is a fully managed, multi-region, and multi-active database feature that provides fast, reliable, and secure data storage and retrieval with seamless scalability. It allows for replication of your Amazon DynamoDB tables in one or more AWS regions, enabling you to access your data from any of these regions and to recover from region-wide failures. This service is suitable for all applications that need to run with low latency and high availability.

Table Usage Guide

The aws_dynamodb_global_table table in Steampipe provides you with information about Global Tables within AWS DynamoDB. This table allows you, as a DevOps engineer, to query global table-specific details, including the table name, creation time, status, and other related details. You can utilize this table to gather insights on global tables, such as the tables' replication status, their regions, and more. The schema outlines for you the various attributes of the DynamoDB Global Table, including the table ARN, creation time, status, and associated tags.
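
Because the replication_group column holds the full list of replicas as a JSON array, you can compute the replica count per table directly; this is a minimal PostgreSQL sketch, assuming replication_group is populated as shown in the examples below:

```sql
-- Count the number of regional replicas for each global table.
-- jsonb_array_length works on the jsonb replication_group column.
select
  global_table_name,
  jsonb_array_length(replication_group) as replica_count
from
  aws_dynamodb_global_table
order by
  replica_count desc;
```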

Examples

List of regions where global table replicas are present

Discover the segments that have global table replicas in different regions. This is useful for understanding the geographical distribution of your DynamoDB global tables.

select
  global_table_name,
  rg -> 'RegionName' as region_name
from
  aws_dynamodb_global_table
  cross join jsonb_array_elements(replication_group) as rg;
select
  global_table_name,
  json_extract(rg.value, '$.RegionName') as region_name
from
  aws_dynamodb_global_table,
  json_each(replication_group) as rg;

DynamoDB global table replica info

Explore the status and progress of global replicas in your DynamoDB service. This can help in identifying any inconsistencies or issues in the global data distribution, enabling you to take necessary actions for maintaining data consistency and availability.

select
  global_table_name,
  global_table_status,
  rg -> 'GlobalSecondaryIndexes' as global_secondary_indexes,
  rg -> 'RegionName' as region_name,
  rg -> 'ReplicaInaccessibleDateTime' as replica_inaccessible_date_time,
  rg -> 'ReplicaStatus' as replica_status,
  rg -> 'ReplicaStatusDescription' as replica_status_description,
  rg -> 'ReplicaStatusPercentProgress' as replica_status_percent_progress
from
  aws_dynamodb_global_table
  cross join jsonb_array_elements(replication_group) as rg;
select
  global_table_name,
  global_table_status,
  json_extract(rg.value, '$.GlobalSecondaryIndexes') as global_secondary_indexes,
  json_extract(rg.value, '$.RegionName') as region_name,
  json_extract(rg.value, '$.ReplicaInaccessibleDateTime') as replica_inaccessible_date_time,
  json_extract(rg.value, '$.ReplicaStatus') as replica_status,
  json_extract(rg.value, '$.ReplicaStatusDescription') as replica_status_description,
  json_extract(rg.value, '$.ReplicaStatusPercentProgress') as replica_status_percent_progress
from
  aws_dynamodb_global_table,
  json_each(replication_group) as rg;
title description
Steampipe Table: aws_dynamodb_metric_account_provisioned_read_capacity_util - Query AWS DynamoDB Metrics using SQL
Allows users to query DynamoDB Metrics on account provisioned read capacity utilization.

Table: aws_dynamodb_metric_account_provisioned_read_capacity_util - Query AWS DynamoDB Metrics using SQL

The AWS DynamoDB Metrics service provides detailed performance metrics for your DynamoDB tables. One such metric is the Account Provisioned Read Capacity Utilization, which measures the percentage of provisioned read capacity units that your application consumes. This allows you to monitor your application's read activity and optimize your provisioned read capacity units for cost-effectiveness and performance.

Table Usage Guide

The aws_dynamodb_metric_account_provisioned_read_capacity_util table in Steampipe provides you with information about account provisioned read capacity utilization metrics within AWS DynamoDB. This table allows you, as a DevOps engineer, to query metric-specific details, including the average, maximum, and minimum read capacity utilization. You can utilize this table to gather insights on DynamoDB performance, such as understanding the read capacity utilization of your DynamoDB tables, identifying potential performance bottlenecks, and planning capacity accordingly. The schema outlines the various attributes of the DynamoDB metric, including the region, account_id, and timestamp for you.
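
Since each row carries a timestamp alongside the aggregated statistics, the per-interval rows can be rolled up into a daily view for trend analysis; a minimal PostgreSQL sketch, assuming the timestamp column is a native timestamp type:

```sql
-- Roll the metric rows up into one row per day per account.
select
  account_id,
  date_trunc('day', timestamp) as day,
  avg(average) as avg_utilization,
  max(maximum) as peak_utilization
from
  aws_dynamodb_metric_account_provisioned_read_capacity_util
group by
  account_id,
  day
order by
  day;
```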

Examples

Basic info

Determine the areas in which your AWS DynamoDB account's provisioned read capacity is being utilized. This can help in monitoring resource usage over time and planning for future capacity needs.

select
  account_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_dynamodb_metric_account_provisioned_read_capacity_util
order by
  timestamp;

Intervals where utilization exceeds 80 percent

Analyze the instances where the provisioned read capacity utilization of your AWS DynamoDB account exceeds 80 percent. This can help in identifying periods of high demand and assist in capacity planning to ensure optimal performance.

select
  account_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_dynamodb_metric_account_provisioned_read_capacity_util
where
  maximum > 80
order by
  timestamp;
title description
Steampipe Table: aws_dynamodb_metric_account_provisioned_write_capacity_util - Query AWS DynamoDB Metrics using SQL
Allows users to query AWS DynamoDB Metrics for account provisioned write capacity utilization.

Table: aws_dynamodb_metric_account_provisioned_write_capacity_util - Query AWS DynamoDB Metrics using SQL

The AWS DynamoDB Metrics service allows you to monitor the performance characteristics of DynamoDB tables. One such metric is the Account Provisioned Write Capacity Utilization, which provides information about the write capacity units consumed by all tables in your AWS account. This helps you manage your resources effectively and optimize your database's performance.

Table Usage Guide

The aws_dynamodb_metric_account_provisioned_write_capacity_util table in Steampipe provides you with information about the provisioned write capacity utilization metrics at the account level within Amazon DynamoDB. This table allows you, as a DevOps engineer, to query details related to the provisioned write capacity, such as the average, maximum, and minimum write capacity units consumed by all tables in your account. You can utilize this table to monitor the utilization of provisioned write capacity, ensuring optimal performance and identifying potential bottlenecks or over-provisioning. The schema outlines the various attributes of the metric, including your account id, region, timestamp, and the provisioned write capacity units.

Examples

Basic info

Determine the areas in which your AWS DynamoDB is being utilized by understanding the provisioned write capacity over time. This query can help you manage resources more efficiently by identifying peak usage times and patterns.

select
  account_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_dynamodb_metric_account_provisioned_write_capacity_util
order by
  timestamp;

Intervals where utilization exceeds 80 percent

Determine the instances where the provisioned write capacity of your AWS DynamoDB exceeds 80 percent. This can be useful to identify periods of high demand and potentially optimize your resource allocation for improved performance.

select
  account_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_dynamodb_metric_account_provisioned_write_capacity_util
where
  maximum > 80
order by
  timestamp;
title description
Steampipe Table: aws_dynamodb_table - Query AWS DynamoDB Tables using SQL
Allows users to query AWS DynamoDB Tables and retrieve detailed information about their configuration, status, and associated attributes.

Table: aws_dynamodb_table - Query AWS DynamoDB Tables using SQL

The AWS DynamoDB service provides fully managed NoSQL database tables that are designed to provide quick and predictable performance by automatically distributing data across multiple servers. These tables support both key-value and document data models, and enable developers to build web, mobile, and IoT applications without worrying about hardware and setup. DynamoDB tables also offer built-in security, in-memory caching, backup and restore, and in-place update capabilities.

Table Usage Guide

The aws_dynamodb_table table in Steampipe provides you with information about tables within AWS DynamoDB. This table allows you, as a DevOps engineer, to query table-specific details, including provisioned throughput, global secondary indexes, local secondary indexes, and associated metadata. You can utilize this table to gather insights on tables, such as their read/write capacity mode, encryption status, and more. The schema outlines the various attributes of the DynamoDB table for you, including the table name, creation date, item count, and associated tags.
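
For instance, the capacity mode and encryption details mentioned above can be surfaced side by side; a hedged sketch, assuming the table exposes a billing_mode column alongside the sse_description column used in the examples below:

```sql
-- Show each table's capacity mode and whether server-side encryption
-- details are recorded (sse_description is null under default encryption).
select
  name,
  billing_mode,
  sse_description is not null as has_sse_description
from
  aws_dynamodb_table;
```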

Examples

List of DynamoDB tables that are not encrypted with a CMK

Identify instances where DynamoDB tables are not encrypted with a Customer Master Key (CMK). This is useful for enhancing security and compliance by ensuring all data is adequately protected.

select
  name,
  sse_description
from
  aws_dynamodb_table
where
  sse_description is null;

List of tables where continuous backup is not enabled

Explore which tables have not enabled continuous backup, a critical feature for data loss prevention and recovery in AWS DynamoDB. This can help identify potential vulnerabilities and areas for improvement in your database management practices.

select
  name,
  continuous_backups_status
from
  aws_dynamodb_table
where
  continuous_backups_status = 'DISABLED';

Point in time recovery info for each table

Determine the areas in which you can restore your AWS DynamoDB tables by identifying the earliest and latest possible recovery times. This is particularly useful in disaster recovery scenarios, where understanding the recovery timeline is crucial.

select
  name,
  point_in_time_recovery_description ->> 'EarliestRestorableDateTime' as earliest_restorable_date_time,
  point_in_time_recovery_description ->> 'LatestRestorableDateTime' as latest_restorable_date_time,
  point_in_time_recovery_description ->> 'PointInTimeRecoveryStatus' as point_in_time_recovery_status
from
  aws_dynamodb_table;
select
  name,
  json_extract(point_in_time_recovery_description, '$.EarliestRestorableDateTime') as earliest_restorable_date_time,
  json_extract(point_in_time_recovery_description, '$.LatestRestorableDateTime') as latest_restorable_date_time,
  json_extract(point_in_time_recovery_description, '$.PointInTimeRecoveryStatus') as point_in_time_recovery_status
from
  aws_dynamodb_table;

List of tables where streaming is enabled with destination status

Determine the areas in which streaming is enabled and assess the status of these destinations. This is useful for monitoring the health and activity of your streaming destinations.

select
  name,
  d ->> 'StreamArn' as kinesis_stream_arn,
  d ->> 'DestinationStatus' as stream_status
from
  aws_dynamodb_table,
  jsonb_array_elements(streaming_destination -> 'KinesisDataStreamDestinations') as d;
select
  name,
  json_extract(d.value, '$.StreamArn') as kinesis_stream_arn,
  json_extract(d.value, '$.DestinationStatus') as stream_status
from
  aws_dynamodb_table,
  json_each(streaming_destination, '$.KinesisDataStreamDestinations') as d;
title description
Steampipe Table: aws_dynamodb_table_export - Query AWS DynamoDB Table Export using SQL
Allows users to query AWS DynamoDB Table Exports, providing detailed information on the exports of DynamoDB tables including the export time, status, and the exported data format.

Table: aws_dynamodb_table_export - Query AWS DynamoDB Table Export using SQL

The AWS DynamoDB Table Export is a feature within the AWS DynamoDB service that allows users to export data from their DynamoDB tables into an Amazon S3 bucket. This operation provides a SQL-compatible export of your DynamoDB data, enabling comprehensive data analysis and large scale exports without impacting the performance of your applications. The exported data can be in one of the following formats: Amazon Ion or DynamoDB JSON.
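
Since the chosen format is surfaced per export, exports can be broken down by format; a minimal sketch, assuming the export_format column holds the raw API values (ION or DYNAMODB_JSON):

```sql
-- Count exports by their output format.
select
  export_format,
  count(*) as export_count
from
  aws_dynamodb_table_export
group by
  export_format;
```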

Table Usage Guide

The aws_dynamodb_table_export table in Steampipe provides you with information about the exports of DynamoDB tables within AWS DynamoDB. This table allows you, as a DevOps engineer, to query export-specific details, including the export time, the status of the export, and the format of the exported data. You can utilize this table to gather insights on exports, such as the time of the last export, the status of ongoing exports, and the format of previously exported data. The schema outlines the various attributes of the DynamoDB table export for you, including the export ARN, export time, export status, and the exported data format.

Examples

Basic info

Explore the status of your AWS DynamoDB table exports to understand when they ended and their respective formats. This can be useful in managing data exports and ensuring they are successfully stored in the correct S3 bucket.

select
  arn,
  end_time,
  export_format,
  export_status,
  s3_bucket
from
  aws_dynamodb_table_export;

List exports that are not completed

Identify instances where the export process from AWS DynamoDB tables is still ongoing. This is useful to monitor the progress of data exports and ensure they are completing as expected.

select
  arn,
  end_time,
  export_format,
  export_status,
  s3_bucket
from
  aws_dynamodb_table_export
where
  export_status <> 'COMPLETED';
select
  arn,
  end_time,
  export_format,
  export_status,
  s3_bucket
from
  aws_dynamodb_table_export
where
  export_status != 'COMPLETED';

List export details from the last 10 days

Explore the details of your recent AWS DynamoDB table exports to ensure they've been completed successfully and sent to the correct S3 bucket. This is particularly useful for maintaining data integrity and tracking export activities over the past 10 days.

select
  arn,
  end_time,
  export_format,
  export_status,
  export_time,
  s3_bucket
from
  aws_dynamodb_table_export
where
  export_time >= now() - interval '10' day;
select
  arn,
  end_time,
  export_format,
  export_status,
  export_time,
  s3_bucket
from
  aws_dynamodb_table_export
where
  export_time >= datetime('now', '-10 day');
title description
Steampipe Table: aws_ebs_snapshot - Query AWS Elastic Block Store (EBS) using SQL
Allows users to query AWS EBS snapshots, providing detailed information about each snapshot's configuration, status, and associated metadata.

Table: aws_ebs_snapshot - Query AWS Elastic Block Store (EBS) using SQL

The AWS Elastic Block Store (EBS) service provides durable, block-level storage volumes for use with Amazon EC2 instances. EBS snapshots are point-in-time copies of your data that are used for enabling disaster recovery, migrating data across regions or accounts, improving backup compliance, or creating dev/test environments. EBS snapshots are incremental, meaning that only the blocks on the device that have changed after your most recent snapshot are saved.

Table Usage Guide

The aws_ebs_snapshot table in Steampipe provides you with information about EBS snapshots within AWS Elastic Block Store (EBS). This table allows you, as a DevOps engineer, to query snapshot-specific details, including snapshot ID, description, status, volume size, and associated metadata. You can utilize this table to gather insights on snapshots, such as snapshots with public permissions, snapshots by volume, and more. The schema outlines the various attributes of the EBS snapshot for you, including the snapshot ID, creation time, volume ID, and associated tags.

Important Notes

  • The aws_ebs_snapshot table lists all private snapshots by default.
  • You can specify an owner alias, owner ID, or snapshot ID in the where clause (where owner_alias=''), (where owner_id=''), or (where snapshot_id='') to list public or shared snapshots from a specific AWS account.
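
The quals above combine with ordinary filters; for example, to sample Amazon-owned public snapshots — note this is a hedged sketch, since 'amazon' is only one possible owner alias and the result volume depends on the region queried:

```sql
-- List a sample of public snapshots shared under the 'amazon' owner alias.
select
  snapshot_id,
  owner_id,
  volume_size,
  start_time
from
  aws_ebs_snapshot
where
  owner_alias = 'amazon'
limit 10;
```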

Examples

List of snapshots which are not encrypted

Discover the segments that include unencrypted snapshots in your AWS EBS environment. This is beneficial for enhancing your security measures by identifying potential vulnerabilities.

PostgreSQL:

select
  snapshot_id,
  arn,
  encrypted
from
  aws_ebs_snapshot
where
  not encrypted;

SQLite:

select
  snapshot_id,
  arn,
  encrypted
from
  aws_ebs_snapshot
where
  encrypted = 0;

List of EBS snapshots which are publicly accessible

Determine the areas in which EBS snapshots are publicly accessible to identify potential security risks. This query is used to uncover instances where EBS snapshots may be exposed to all users, which could lead to unauthorized data access.

PostgreSQL:

select
  snapshot_id,
  arn,
  volume_id,
  perm ->> 'UserId' as userid,
  perm ->> 'Group' as "group"
from
  aws_ebs_snapshot
  cross join jsonb_array_elements(create_volume_permissions) as perm
where
  perm ->> 'Group' = 'all';

SQLite:

select
  snapshot_id,
  arn,
  volume_id,
  json_extract(perm.value, '$.UserId') as userid,
  json_extract(perm.value, '$.Group') as "group"
from
  aws_ebs_snapshot,
  json_each(create_volume_permissions) as perm
where
  json_extract(perm.value, '$.Group') = 'all';
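
The SQLite variant's json_each pattern can be exercised outside Steampipe. The sketch below is a standalone mock — the snapshots table, column names, and rows are invented for illustration — that applies the same Group = 'all' filter with Python's built-in sqlite3 module:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "create table snapshots (snapshot_id text, create_volume_permissions text)"
)
# Mock rows: one public snapshot (Group = 'all'), one shared with a single account
conn.executemany(
    "insert into snapshots values (?, ?)",
    [
        ("snap-public", json.dumps([{"Group": "all"}])),
        ("snap-shared", json.dumps([{"UserId": "123456789012"}])),
    ],
)
# json_each expands the permissions array into one row per element;
# json_extract then pulls the Group field out of each element.
rows = conn.execute(
    """
    select snapshot_id, json_extract(perm.value, '$.Group') as grp
    from snapshots, json_each(create_volume_permissions) as perm
    where json_extract(perm.value, '$.Group') = 'all'
    """
).fetchall()
print(rows)  # [('snap-public', 'all')]
```

Only the publicly shared snapshot survives the filter; against the real table, the same shape flags snapshots exposed to all AWS users.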

Find the Account IDs with which the snapshots are shared

Determine the accounts that have access to specific snapshots in your AWS EBS setup. This can be useful for auditing purposes, ensuring that only authorized accounts have access to your data.

PostgreSQL:

select
  snapshot_id,
  volume_id,
  perm ->> 'UserId' as account_ids
from
  aws_ebs_snapshot
  cross join jsonb_array_elements(create_volume_permissions) as perm;

SQLite:

select
  snapshot_id,
  volume_id,
  json_extract(perm.value, '$.UserId') as account_ids
from
  aws_ebs_snapshot
  cross join json_each(create_volume_permissions) as perm;

Find the snapshot count per volume

Assess the elements within each volume to determine the number of snapshots associated with it. This can be useful for understanding the backup frequency and data recovery potential for each volume.

select
  volume_id,
  count(snapshot_id) as snapshot_count
from
  aws_ebs_snapshot
group by
  volume_id;

List snapshots owned by a specific AWS account

Determine the areas in which specific AWS accounts own snapshots. This can be useful for managing and tracking resources across different accounts in a cloud environment.

select
  snapshot_id,
  arn,
  encrypted,
  owner_id
from
  aws_ebs_snapshot
where
  owner_id = '859788737657';

Get a specific snapshot by ID

Discover the specific details of a particular snapshot using its unique identifier. This can be useful for auditing purposes, such as confirming the owner or checking if the snapshot is encrypted.

select
  snapshot_id,
  arn,
  encrypted,
  owner_id
from
  aws_ebs_snapshot
where
  snapshot_id = 'snap-07bf4f91353ad71ae';

List snapshots owned by Amazon (Note: This will attempt to list ALL public snapshots)

Discover the segments that are owned by Amazon, specifically focusing on public snapshots. This is particularly useful for gaining insights into the distribution and ownership of snapshots within the Amazon ecosystem.

select
  snapshot_id,
  arn,
  encrypted,
  owner_id
from
  aws_ebs_snapshot
where
  owner_alias = 'amazon';
title: Steampipe Table: aws_ebs_volume - Query AWS Elastic Block Store (EBS) using SQL
description: Allows users to query AWS Elastic Block Store (EBS) volumes for detailed information about their configuration, status, and associated tags.

Table: aws_ebs_volume - Query AWS Elastic Block Store (EBS) using SQL

The AWS Elastic Block Store (EBS) is a high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction intensive workloads at any scale. It provides persistent block-level storage volumes for use with Amazon EC2 instances. EBS volumes are highly available and reliable storage volumes that can be attached to any running instance and used like a physical hard drive.

Table Usage Guide

The aws_ebs_volume table in Steampipe provides you with information about volumes within AWS Elastic Block Store (EBS). This table allows you, as a DevOps engineer, to query volume-specific details, including size, state, type, and associated metadata. You can utilize this table to gather insights on volumes, such as their encryption status, IOPS performance, and snapshot details. The schema outlines the various attributes of the EBS volume for you, including the volume ID, creation time, attached instances, and associated tags.

Examples

List of unencrypted EBS volumes

Identify instances where EBS volumes in your AWS environment are not encrypted. This is crucial for security audits and ensuring compliance with data protection policies.

PostgreSQL:

select
  volume_id,
  encrypted
from
  aws_ebs_volume
where
  not encrypted;

SQLite:

select
  volume_id,
  encrypted
from
  aws_ebs_volume
where
  encrypted = 0;

List of unattached EBS volumes

Identify instances where EBS volumes in AWS are not attached to any instances. This could help in optimizing resource usage and managing costs by removing unnecessary volumes.

PostgreSQL:

select
  volume_id,
  volume_type
from
  aws_ebs_volume
where
  jsonb_array_length(attachments) = 0;

SQLite:

select
  volume_id,
  volume_type
from
  aws_ebs_volume
where
  json_array_length(attachments) = 0;
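
The empty-array check translates directly to plain SQLite. This minimal sketch (the volumes table and its rows are hypothetical) shows json_array_length picking out the volume with no attachments:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table volumes (volume_id text, attachments text)")
conn.executemany(
    "insert into volumes values (?, ?)",
    [
        ("vol-attached", json.dumps([{"InstanceId": "i-1"}])),
        ("vol-unattached", json.dumps([])),  # empty attachments array
    ],
)
# json_array_length(attachments) = 0 keeps only volumes with no attachments
rows = conn.execute(
    "select volume_id from volumes where json_array_length(attachments) = 0"
).fetchall()
print(rows)  # [('vol-unattached',)]
```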

List of Provisioned IOPS SSD (io1) volumes

Determine the areas in which Provisioned IOPS SSD (io1) volumes are being used in your AWS infrastructure. This information can help optimize storage performance and costs by identifying potential areas for volume type adjustment.

select
  volume_id,
  volume_type
from
  aws_ebs_volume
where
  volume_type = 'io1';

List of EBS volumes with size more than 100GiB

Identify instances where AWS EBS volumes exceed 100GiB in size. This is useful to manage storage resources and prevent excessive usage.

select
  volume_id,
  size
from
  aws_ebs_volume
where
  size > 100;

Count the number of EBS volumes by volume type

Identify the distribution of different types of EBS volumes in your AWS environment. This helps in understanding the usage patterns and planning for cost optimization.

select
  volume_type,
  count(volume_type) as count
from
  aws_ebs_volume
group by
  volume_type;

Find EBS Volumes Attached To Stopped EC2 Instances

Discover the segments that include EBS volumes attached to EC2 instances that are currently in a stopped state. This information can be beneficial to optimize resource allocation and reduce unnecessary costs.

PostgreSQL:

select
  volume_id,
  size,
  att ->> 'InstanceId' as instance_id
from
  aws_ebs_volume
  cross join jsonb_array_elements(attachments) as att
  join aws_ec2_instance as i on i.instance_id = att ->> 'InstanceId'
where
  instance_state = 'stopped';

SQLite:

select
  volume_id,
  size,
  json_extract(att.value, '$.InstanceId') as instance_id
from
  aws_ebs_volume
  join json_each(attachments) as att
  join aws_ec2_instance as i on i.instance_id = json_extract(att.value, '$.InstanceId')
where
  instance_state = 'stopped';
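
The join between the expanded attachments array and the instance table can be mimicked with mock tables; everything below (table names, IDs, sizes) is invented to illustrate the shape of the SQLite query:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    create table volumes (volume_id text, size integer, attachments text);
    create table instances (instance_id text, instance_state text);
    """
)
conn.executemany(
    "insert into volumes values (?, ?, ?)",
    [
        ("vol-1", 100, json.dumps([{"InstanceId": "i-stopped"}])),
        ("vol-2", 50, json.dumps([{"InstanceId": "i-running"}])),
    ],
)
conn.executemany(
    "insert into instances values (?, ?)",
    [("i-stopped", "stopped"), ("i-running", "running")],
)
# Expand attachments, join each InstanceId to its instance, keep stopped ones
rows = conn.execute(
    """
    select v.volume_id, v.size, json_extract(att.value, '$.InstanceId') as instance_id
    from volumes as v
    join json_each(v.attachments) as att
    join instances as i on i.instance_id = json_extract(att.value, '$.InstanceId')
    where i.instance_state = 'stopped'
    """
).fetchall()
print(rows)  # [('vol-1', 100, 'i-stopped')]
```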

title: Steampipe Table: aws_ebs_volume_metric_read_ops - Query AWS EBS Volume using SQL
description: Allows users to query AWS EBS Volume read operations metrics.

Table: aws_ebs_volume_metric_read_ops - Query AWS EBS Volume using SQL

The AWS EBS Volume is a block-level storage device that you can attach to a single EC2 instance. It allows you to persist data past the lifespan of a single Amazon EC2 instance, and the data on an EBS volume is replicated within its availability zone to prevent data loss due to failure. The 'read_ops' metric provides the total number of read operations from an EBS volume, which can be queried using SQL.

Table Usage Guide

The aws_ebs_volume_metric_read_ops table in Steampipe provides you with information about read operations metrics of volumes within AWS Elastic Block Store (EBS). This table allows you, as a DevOps engineer, to query volume-specific details, including the number of read operations that have been completed, the timestamp of the measurement, and associated metadata. You can utilize this table to gather insights on volumes, such as the frequency of read operations, the performance of volumes over time, and more. The schema outlines the various attributes of the EBS volume read operations metrics for you, including the volume id, timestamp, and the number of read operations.

The aws_ebs_volume_metric_read_ops table provides you with metric statistics at 5-minute intervals for the most recent 5 days.

Examples

Basic info

Analyze the performance of your Amazon EBS volumes over time. This query aids in understanding the read operation metrics, including minimum, maximum, average and total read operations, helping you optimize your resource usage and troubleshoot potential issues.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_read_ops
order by
  volume_id,
  timestamp;

Intervals where volumes exceed 1000 average read ops

Determine the periods when the average read operations on AWS EBS volumes surpass a certain threshold. This is useful for identifying potential performance bottlenecks and planning for capacity upgrades.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_read_ops
where
  average > 1000
order by
  volume_id,
  timestamp;

Intervals where volumes exceed 8000 max read ops

Determine the intervals where the operation count on your Elastic Block Store (EBS) volumes exceeds 8000 read operations. This can assist in identifying potential performance issues or bottlenecks in your AWS environment.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_read_ops
where
  maximum > 8000
order by
  volume_id,
  timestamp;

Read, Write, and Total IOPS

Assess the performance of your AWS EBS volumes by examining the average, maximum, and minimum Input/Output Operations Per Second (IOPS). This can help you understand your application’s load on the volumes and plan for capacity or performance improvements.

select 
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from 
  aws_ebs_volume_metric_read_ops as r,
  aws_ebs_volume_metric_write_ops as w
where 
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;
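
The self-join on volume_id and timestamp that combines the read and write series can be sanity-checked on mock data. The two metric tables and the sample averages below are invented; the arithmetic mirrors the iops_avg column above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    create table read_ops (volume_id text, timestamp text, average real);
    create table write_ops (volume_id text, timestamp text, average real);
    insert into read_ops values ('vol-1', '2024-01-01T00:00:00Z', 120.4);
    insert into write_ops values ('vol-1', '2024-01-01T00:00:00Z', 79.6);
    """
)
# Join the two series on (volume_id, timestamp) and sum the rounded averages
rows = conn.execute(
    """
    select r.volume_id, r.timestamp,
           round(r.average) + round(w.average) as iops_avg
    from read_ops as r, write_ops as w
    where r.volume_id = w.volume_id
      and r.timestamp = w.timestamp
    """
).fetchall()
print(rows)  # [('vol-1', '2024-01-01T00:00:00Z', 200.0)]
```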
title: Steampipe Table: aws_ebs_volume_metric_read_ops_daily - Query AWS EBS Volume Metrics using SQL
description: Allows users to query AWS EBS Volume metrics for daily read operations.

Table: aws_ebs_volume_metric_read_ops_daily - Query AWS EBS Volume Metrics using SQL

The AWS EBS Volume Metrics is a feature of Amazon Elastic Block Store (EBS) that provides raw block-level storage that can be attached to Amazon EC2 instances. These metrics provide visibility into the performance, operation, and overall health of your volumes, allowing you to optimize usage and respond to system-wide performance changes. With the ability to query these metrics using SQL, you can gain insights into read operations on a daily basis, enhancing your ability to monitor and manage your data storage effectively.

Table Usage Guide

The aws_ebs_volume_metric_read_ops_daily table in Steampipe provides you with information about the daily read operations metrics of AWS Elastic Block Store (EBS) volumes. This table allows you, as a system administrator, DevOps engineer, or other technical professional, to query details about the daily read operations performed on EBS volumes, which is useful for your performance analysis, capacity planning, and cost optimization. The schema outlines various attributes of the EBS volume metrics, including the average, maximum, and minimum read operations, as well as the sum of read operations and the time of the metric capture.

The aws_ebs_volume_metric_read_ops_daily table provides you with metric statistics at 24-hour intervals for the last year.

Examples

Basic info

Explore the performance of your AWS EBS volumes over time. This query can help you understand the volume of read operations, which can be useful in assessing system performance, identifying potential bottlenecks, and planning for capacity.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_read_ops_daily
order by
  volume_id,
  timestamp;

Intervals where volumes exceed 1000 average read ops

Discover the instances when the average read operations on AWS EBS volumes exceed 1000. This information can be used to identify potential performance issues or optimize resource allocation.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_read_ops_daily
where
  average > 1000
order by
  volume_id,
  timestamp;

Intervals where volumes exceed 8000 max read ops

Determine the instances where the daily read operations on AWS EBS volumes exceed a threshold of 8000. This can be useful in identifying potential performance issues or capacity planning for your storage infrastructure.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_read_ops_daily
where
  maximum > 8000
order by
  volume_id,
  timestamp;

Read, Write, and Total IOPS

Explore the average, maximum, and minimum Input/Output operations for each volume over time to understand the performance of your storage volumes. This query is useful for identifying any potential bottlenecks or inefficiencies in data transfer operations.

select 
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from 
  aws_ebs_volume_metric_read_ops_daily as r,
  aws_ebs_volume_metric_write_ops_daily as w
where 
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;
title: Steampipe Table: aws_ebs_volume_metric_read_ops_hourly - Query Amazon EC2 EBS Volume using SQL
description: Allows users to query Amazon EC2 EBS Volume Read Operations metrics on an hourly basis.

Table: aws_ebs_volume_metric_read_ops_hourly - Query Amazon EC2 EBS Volume using SQL

The AWS EBS (Elastic Block Store) Volume is a high-performance block storage service designed for use with Amazon Elastic Compute Cloud (EC2) for both throughput and transaction intensive workloads at any scale. It offers a range of volume types that are optimized to handle different workloads, including those that require high performance like transactional workloads, and those that require low cost per gigabyte like data warehousing. EBS Volumes are highly available and reliable storage volumes that can be attached to any running instance that is in the same Availability Zone.

Table Usage Guide

The aws_ebs_volume_metric_read_ops_hourly table in Steampipe provides you with information about the read operations metrics of Amazon Elastic Block Store (EBS) volumes within Amazon Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query volume-specific read operations details on an hourly basis, including the number of completed read operations from a volume, average, maximum, and minimum read operations, and the count of data points used for the statistical calculation. You can utilize this table to gather insights on volume performance, monitor the read activity of EBS volumes, and make data-driven decisions for performance optimization. The schema outlines the various attributes of the EBS volume read operations metrics for you, including the volume ID, timestamp, average read operations, and more.

The aws_ebs_volume_metric_read_ops_hourly table provides you with metric statistics at 1 hour intervals for the most recent 60 days.

Examples

Basic info

Explore the performance of your AWS EBS volume over time. This query allows you to track the number of read operations per hour, helping you to understand usage patterns and optimize resource allocation.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_read_ops_hourly
order by
  volume_id,
  timestamp;

Intervals where volumes exceed 1000 average read ops

Identify instances where the average read operations on AWS EBS volumes exceed 1000. This can be useful in monitoring and managing resource utilization, helping to optimize performance and prevent potential bottlenecks.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_read_ops_hourly
where
  average > 1000
order by
  volume_id,
  timestamp;

Intervals where volumes exceed 8000 max read ops

Identify instances where your AWS Elastic Block Store (EBS) volumes exceed 8000 maximum read operations per hour. This can help in analyzing the performance of your volumes and take necessary actions if they are under heavy load.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_read_ops_hourly
where
  maximum > 8000
order by
  volume_id,
  timestamp;

Intervals where volume average iops exceeds provisioned iops

Determine the periods where the average input/output operations per second (IOPS) surpasses the provisioned IOPS for Amazon EBS volumes. This can be used to identify potential performance issues and ensure that the provisioned IOPS meets the application demand.

PostgreSQL:

select
  r.volume_id,
  r.timestamp,
  v.iops as provisioned_iops,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg
from
  aws_ebs_volume_metric_read_ops_hourly as r,
  aws_ebs_volume_metric_write_ops_hourly as w,
  aws_ebs_volume as v
where
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
  and v.volume_id = r.volume_id
  and r.average + w.average > v.iops
order by
  r.volume_id,
  r.timestamp;

SQLite:

select
  r.volume_id,
  r.timestamp,
  v.iops as provisioned_iops,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg
from
  aws_ebs_volume_metric_read_ops_hourly as r
join
  aws_ebs_volume_metric_write_ops_hourly as w
on
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
join
  aws_ebs_volume as v
on
  v.volume_id = r.volume_id
where
  r.average + w.average > v.iops
order by
  r.volume_id,
  r.timestamp;
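
The threshold comparison against provisioned IOPS works the same way on mock data; the tables and numbers below are hypothetical and exist only to show which intervals the predicate flags:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    create table read_ops (volume_id text, timestamp text, average real);
    create table write_ops (volume_id text, timestamp text, average real);
    create table volumes (volume_id text, iops integer);
    insert into volumes values ('vol-1', 3000);
    insert into read_ops values ('vol-1', 't1', 1800.0), ('vol-1', 't2', 900.0);
    insert into write_ops values ('vol-1', 't1', 1500.0), ('vol-1', 't2', 1200.0);
    """
)
# Only t1 exceeds the provisioned 3000 IOPS (1800 + 1500 = 3300)
rows = conn.execute(
    """
    select r.volume_id, r.timestamp, v.iops as provisioned_iops,
           round(r.average) + round(w.average) as iops_avg
    from read_ops as r
    join write_ops as w on r.volume_id = w.volume_id and r.timestamp = w.timestamp
    join volumes as v on v.volume_id = r.volume_id
    where r.average + w.average > v.iops
    """
).fetchall()
print(rows)  # [('vol-1', 't1', 3000, 3300.0)]
```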

Read, Write, and Total IOPS

Explore the performance of your AWS EBS volumes by evaluating the average, maximum, and minimum input/output operations per second (IOPS). This analysis can help identify any unusual activity or potential bottlenecks in your system, allowing you to optimize for better performance.

select 
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from 
  aws_ebs_volume_metric_read_ops_hourly as r,
  aws_ebs_volume_metric_write_ops_hourly as w
where 
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;
title: Steampipe Table: aws_ebs_volume_metric_write_ops - Query AWS Elastic Block Store (EBS) using SQL
description: Allows users to query AWS Elastic Block Store (EBS) volume write operations metrics.

Table: aws_ebs_volume_metric_write_ops - Query AWS Elastic Block Store (EBS) using SQL

The AWS Elastic Block Store (EBS) is a high-performance block storage service designed for use with Amazon EC2 for both throughput and transaction intensive workloads at any scale. It provides persistent block level storage volumes for use with EC2 instances. The "write_ops" metric represents the number of write operations performed on the EBS volume.

Table Usage Guide

The aws_ebs_volume_metric_write_ops table in Steampipe provides you with information about the write operations metrics of EBS volumes within AWS Elastic Block Store (EBS). This table allows you, as a DevOps engineer, to query volume-specific details, including the number of write operations, the timestamp of the data point, and the statistical value of the data point. You can utilize this table to gather insights on EBS volumes, such as volume performance, write load, and more. The schema outlines the various attributes of the EBS volume write operations metrics for you, including the volume ID, timestamp, and statistical values.

The aws_ebs_volume_metric_write_ops table provides you with metric statistics at 5 minute intervals for the most recent 5 days.

Examples

Basic info

Gain insights into the performance of your AWS EBS volumes over time. This query helps in monitoring the write operations, which aids in identifying potential bottlenecks or performance issues.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops
order by
  volume_id,
  timestamp;

Intervals where volumes exceed 1000 average write ops

Identify instances where the average write operations on AWS EBS volumes exceed 1000. This can be useful in monitoring performance and identifying potential bottlenecks or areas for optimization.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops
where
  average > 1000
order by
  volume_id,
  timestamp;

Intervals where volumes exceed 8000 max write ops

Identify instances where the maximum write operations on AWS EBS volumes exceed 8000. This can be useful in understanding the load on your EBS volumes, and may help you optimize your resources for better performance.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops
where
  maximum > 8000
order by
  volume_id,
  timestamp;

Read, Write, and Total IOPS

Explore the performance of your storage volumes by analyzing the average, maximum, and minimum Input/Output operations per second (IOPS). This allows you to monitor and optimize your storage efficiency, ensuring smooth operations.

PostgreSQL:

select
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from
  aws_ebs_volume_metric_read_ops as r,
  aws_ebs_volume_metric_write_ops as w
where
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;

SQLite:

select
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from
  aws_ebs_volume_metric_read_ops as r
join
  aws_ebs_volume_metric_write_ops as w on r.volume_id = w.volume_id and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;
title: Steampipe Table: aws_ebs_volume_metric_write_ops_daily - Query AWS EBS Volume Metrics using SQL
description: Allows users to query AWS EBS Volume Metrics for daily write operations.

Table: aws_ebs_volume_metric_write_ops_daily - Query AWS EBS Volume Metrics using SQL

The AWS EBS Volume Metrics provides a way to monitor the performance of your Amazon Elastic Block Store (EBS) volumes. It allows you to capture write operations on a daily basis, which can help you optimize your storage usage. These metrics can be queried using SQL, providing a flexible and efficient way to analyze your EBS performance data.

Table Usage Guide

The aws_ebs_volume_metric_write_ops_daily table in Steampipe provides you with information about the daily write operations metrics of EBS volumes within AWS Elastic Block Store (EBS). This table allows you, as a DevOps engineer, to query volume-specific details, including the number of write operations, the timestamp of data points, and the statistics for the data points. You can utilize this table to gather insights on EBS volumes, such as the volume's write operations performance, pattern of write operations over time, and more. The schema outlines the various attributes of the EBS volume metrics for you, including the average, maximum, minimum, and sum of write operations, as well as the sample count for each data point.

The aws_ebs_volume_metric_write_ops_daily table provides you with metric statistics at 24 hour intervals for the last year.

Examples

Basic info

This query allows you to analyze the daily write operations of AWS EBS volumes. It can be used to gain insights into the performance and usage patterns of your volumes, helping optimize resource allocation and troubleshoot potential issues.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_daily
order by
  volume_id,
  timestamp;

Intervals where volumes exceed 1000 average write ops

Identify instances where the daily average write operations on AWS EBS volumes exceed 1000. This is useful for monitoring usage patterns and potentially preventing system overloads.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_daily
where
  average > 1000
order by
  volume_id,
  timestamp;

Intervals where volumes exceed 8000 max write ops

Determine the instances where the maximum write operations on AWS EBS volumes surpass the 8000 mark. This is useful to identify potential bottlenecks in your storage system and take proactive measures to prevent performance degradation.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_daily
where
  maximum > 8000
order by
  volume_id,
  timestamp;
select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_daily
where
  maximum > 8000
order by
  volume_id,
  timestamp;

Read, Write, and Total IOPS

Explore the performance of your AWS EBS volumes by understanding their input/output operations over time. This query will help you analyze the average, maximum, and minimum read/write operations, allowing you to optimize your storage usage and troubleshoot any potential issues.

select 
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from 
  aws_ebs_volume_metric_read_ops_daily as r,
  aws_ebs_volume_metric_write_ops_daily as w
where 
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;
select 
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from 
  aws_ebs_volume_metric_read_ops_daily as r,
  aws_ebs_volume_metric_write_ops_daily as w
where 
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;
title description
Steampipe Table: aws_ebs_volume_metric_write_ops_hourly - Query AWS EBS Volume Metrics using SQL
Allows users to query AWS EBS Volume Metrics on hourly write operations.

Table: aws_ebs_volume_metric_write_ops_hourly - Query AWS EBS Volume Metrics using SQL

The AWS EBS (Elastic Block Store) Volume Metrics is a feature that allows you to monitor the performance of your EBS volumes for analysis and troubleshooting. With the 'write_ops' metric, you can track the number of write operations performed on a specified EBS volume per hour. This data can be queried using SQL, providing an accessible way to monitor and manage the performance of your EBS volumes.

Table Usage Guide

The aws_ebs_volume_metric_write_ops_hourly table in Steampipe provides you with information about the hourly write operations metrics of AWS Elastic Block Store (EBS) volumes. This table allows you, as a cloud engineer, a member of a DevOps team, or a data analyst, to query and analyze the hourly write operation details of EBS volumes, including the number of write operations and the timestamp of the data points. You can utilize this table to track write operations, monitor EBS performance, and plan capacity. The schema outlines the various attributes of the EBS volume metrics for you, including the volume ID, timestamp, and the number of write operations.

The aws_ebs_volume_metric_write_ops_hourly table provides you with metric statistics at 1 hour intervals for the most recent 60 days.

Examples

Basic info

Gain insights into the performance of your AWS EBS volumes by analyzing write operations over time. This can assist in identifying potential issues, optimizing resource usage, and planning capacity.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_hourly
order by
  volume_id,
  timestamp;
select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_hourly
order by
  volume_id,
  timestamp;
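Because the hourly table keeps 60 days of data points, a simple ordering can surface the busiest intervals. A minimal sketch (the limit of ten rows is an arbitrary choice):

```sql
-- Ten busiest hourly intervals by total write ops
select
  volume_id,
  timestamp,
  sum
from
  aws_ebs_volume_metric_write_ops_hourly
order by
  sum desc
limit 10;
```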

Intervals where volumes exceed 1000 average write ops

Discover the instances where the average write operations on your AWS EBS volumes exceed 1000 per hour. This can be useful to identify potential performance issues or unusual activity on your volumes.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_hourly
where
  average > 1000
order by
  volume_id,
  timestamp;
select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_hourly
where
  average > 1000
order by
  volume_id,
  timestamp;

Intervals where volumes exceed 8000 max write ops

Identify instances where the maximum write operations on AWS EBS volumes exceed 8000 within an hour. This can help monitor and manage storage performance, ensuring optimal operation and preventing potential issues.

select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_hourly
where
  maximum > 8000
order by
  volume_id,
  timestamp;
select
  volume_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_ebs_volume_metric_write_ops_hourly
where
  maximum > 8000
order by
  volume_id,
  timestamp;

Intervals where volume average iops exceeds provisioned iops

Identify instances where the average input/output operations per second (IOPS) surpasses the provisioned IOPS on your AWS EBS volumes. This is crucial for optimizing your storage performance and preventing any potential bottlenecks.

select 
  r.volume_id,
  r.timestamp,
  v.iops as provisioned_iops,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg
from 
  aws_ebs_volume_metric_read_ops_hourly as r,
  aws_ebs_volume_metric_write_ops_hourly as w,
  aws_ebs_volume as v
where 
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
  and v.volume_id = r.volume_id 
  and r.average + w.average > v.iops
order by
  r.volume_id,
  r.timestamp;
select 
  r.volume_id,
  r.timestamp,
  v.iops as provisioned_iops,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg
from 
  aws_ebs_volume_metric_read_ops_hourly as r,
  aws_ebs_volume_metric_write_ops_hourly as w,
  aws_ebs_volume as v
where 
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
  and v.volume_id = r.volume_id 
  and r.average + w.average > v.iops
order by
  r.volume_id,
  r.timestamp;

Read, Write, and Total IOPS

Analyze the settings to understand the average, maximum, and minimum input/output operations per second (IOPS) for both read and write operations on AWS EBS volumes. This helps in assessing the performance and identifying any potential bottlenecks in data transfer.

select 
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from 
  aws_ebs_volume_metric_read_ops_hourly as r,
  aws_ebs_volume_metric_write_ops_hourly as w
where 
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;
select 
  r.volume_id,
  r.timestamp,
  round(r.average) + round(w.average) as iops_avg,
  round(r.average) as read_ops_avg,
  round(w.average) as write_ops_avg,
  round(r.maximum) + round(w.maximum) as iops_max,
  round(r.maximum) as read_ops_max,
  round(w.maximum) as write_ops_max,
  round(r.minimum) + round(w.minimum) as iops_min,
  round(r.minimum) as read_ops_min,
  round(w.minimum) as write_ops_min
from 
  aws_ebs_volume_metric_read_ops_hourly as r,
  aws_ebs_volume_metric_write_ops_hourly as w
where 
  r.volume_id = w.volume_id
  and r.timestamp = w.timestamp
order by
  r.volume_id,
  r.timestamp;
title description
Steampipe Table: aws_ec2_ami - Query AWS EC2 AMI using SQL
Allows users to query AWS EC2 AMIs (Amazon Machine Images) to retrieve detailed information about each AMI available in the AWS account.

Table: aws_ec2_ami - Query AWS EC2 AMI using SQL

The AWS EC2 AMI (Amazon Machine Image) provides the information necessary to launch an instance, which is a virtual server in the cloud. You specify an AMI when you launch an instance, and you can launch as many instances from the AMI as you need. AMIs are designed to provide a stable, secure, and high performance execution environment for applications running on Amazon EC2.

Table Usage Guide

The aws_ec2_ami table in Steampipe provides you with information about AMIs (Amazon Machine Images) within Amazon Elastic Compute Cloud (Amazon EC2). This table allows you, as a DevOps engineer, system administrator, or other technical professional, to query AMI-specific details, including its attributes, block device mappings, and associated tags. You can utilize this table to gather insights on AMIs, such as identifying unused or outdated AMIs, verifying AMI permissions, and more. The schema outlines the various attributes of the AMI for you, including the AMI ID, creation date, owner, and visibility status.

Important Notes

  • The aws_ec2_ami table only lists images in your account. To list other images shared with you, please use the aws_ec2_ami_shared table.

Examples

Basic info

Explore the different Amazon Machine Images (AMIs) in your AWS EC2 environment to understand their status, location, creation date, visibility, and root device. This is useful for auditing your resources, ensuring security compliance, and managing your infrastructure.

select
  name,
  image_id,
  state,
  image_location,
  creation_date,
  public,
  root_device_name
from
  aws_ec2_ami;
select
  name,
  image_id,
  state,
  image_location,
  creation_date,
  public,
  root_device_name
from
  aws_ec2_ami;

List public AMIs

Discover the segments that contain public Amazon Machine Images (AMIs) to help manage and maintain your AWS resources more effectively.

select
  name,
  image_id,
  public
from
  aws_ec2_ami
where
  public;
select
  name,
  image_id,
  public
from
  aws_ec2_ami
where
  public = 1;

List failed AMIs

Determine the areas in which Amazon Machine Images (AMIs) have failed. This can be useful for troubleshooting and identifying potential issues within your AWS EC2 instances.

select
  name,
  image_id,
  public,
  state
from
  aws_ec2_ami
where
  state = 'failed';
select
  name,
  image_id,
  public,
  state
from
  aws_ec2_ami
where
  state = 'failed';

Get volume info for each AMI

Explore the characteristics of each Amazon Machine Image (AMI), such as volume size and type, encryption status, and deletion policy. This information is vital for managing storage resources efficiently and ensuring data security within your AWS EC2 environment.

select
  name,
  image_id,
  mapping -> 'Ebs' ->> 'VolumeSize' as volume_size,
  mapping -> 'Ebs' ->> 'VolumeType' as volume_type,
  mapping -> 'Ebs' ->> 'Encrypted' as encryption_status,
  mapping -> 'Ebs' ->> 'KmsKeyId' as kms_key,
  mapping -> 'Ebs' ->> 'DeleteOnTermination' as delete_on_termination
from
  aws_ec2_ami
  cross join jsonb_array_elements(block_device_mappings) as mapping;
select
  name,
  image_id,
  json_extract(mapping.value, '$.Ebs.VolumeSize') as volume_size,
  json_extract(mapping.value, '$.Ebs.VolumeType') as volume_type,
  json_extract(mapping.value, '$.Ebs.Encrypted') as encryption_status,
  json_extract(mapping.value, '$.Ebs.KmsKeyId') as kms_key,
  json_extract(mapping.value, '$.Ebs.DeleteOnTermination') as delete_on_termination
from
  aws_ec2_ami,
  json_each(block_device_mappings) as mapping;
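The usage guide above mentions identifying outdated AMIs; one way to sketch that is to filter on creation_date (Postgres interval syntax; the one-year cutoff is an assumed threshold):

```sql
-- AMIs created more than one year ago (the cutoff is an assumption)
select
  name,
  image_id,
  creation_date
from
  aws_ec2_ami
where
  creation_date < now() - interval '1 year'
order by
  creation_date;
```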
title description
Steampipe Table: aws_ec2_ami_shared - Query AWS EC2 AMI using SQL
Allows users to query shared Amazon Machine Images (AMIs) in AWS EC2

Table: aws_ec2_ami_shared - Query AWS EC2 AMI using SQL

The AWS EC2 AMI (Amazon Machine Image) provides the information necessary to launch an instance, which is a virtual server in the cloud. You can specify an AMI when you launch instances, and you can launch as many instances from the AMI as you need. You can also share your own custom AMI with other AWS accounts, enabling them to launch instances with identical configurations.

Table Usage Guide

The aws_ec2_ami_shared table in Steampipe provides you with information about shared Amazon Machine Images (AMIs) within AWS EC2. This table enables you, as a system administrator or DevOps engineer, to query shared AMI-specific details, including image ID, creation date, state, and associated tags. You can utilize this table to gather insights on shared AMIs, such as their availability, permissions, and associated metadata. The schema outlines the various attributes of the shared AMI, including the image type, launch permissions, and virtualization type.

Important Notes

  • You must specify an Owner ID or Image ID in the where clause (where owner_id='...' or where image_id='...').
  • The aws_ec2_ami_shared table can list any image, but you must specify owner_id or image_id.
  • If you want to list all of the images in your account, use the aws_ec2_ami table.

Examples

Basic info

Explore which AWS EC2 shared AMI resources are owned by a specific user to understand their configurations. This can be useful in auditing access and managing resources across your organization.

select
  name,
  image_id,
  state,
  image_location,
  creation_date,
  public,
  root_device_name
from
  aws_ec2_ami_shared
where
  owner_id = '137112412989';
select
  name,
  image_id,
  state,
  image_location,
  creation_date,
  public,
  root_device_name
from
  aws_ec2_ami_shared
where
  owner_id = '137112412989';

List arm64 AMIs

Explore which Amazon Machine Images (AMIs) with 'arm64' architecture are shared by a specific owner. This can be useful in identifying suitable AMIs for deployment on 'arm64' architecture instances.

select
  name,
  image_id,
  state,
  image_location,
  creation_date,
  public,
  root_device_name
from
  aws_ec2_ami_shared
where
  owner_id = '137112412989'
  and architecture = 'arm64';
select
  name,
  image_id,
  state,
  image_location,
  creation_date,
  public,
  root_device_name
from
  aws_ec2_ami_shared
where
  owner_id = '137112412989'
  and architecture = 'arm64';

List EC2 instances using AMIs owned by a specific AWS account

Explore which EC2 instances are using AMIs owned by a particular AWS account. This is useful to maintain account security and manage resources efficiently.

select
  i.title,
  i.instance_id,
  i.image_id,
  ami.name,
  ami.description,
  ami.platform_details
from
  aws_ec2_instance as i
  join aws_ec2_ami_shared as ami on i.image_id = ami.image_id
where
  ami.owner_id = '137112412989';
select
  i.title,
  i.instance_id,
  i.image_id,
  ami.name,
  ami.description,
  ami.platform_details
from
  aws_ec2_instance as i
  join aws_ec2_ami_shared as ami on i.image_id = ami.image_id
where
  ami.owner_id = '137112412989';
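As noted in the Important Notes above, you can key this table on image_id instead of owner_id; the AMI ID below is a placeholder:

```sql
select
  name,
  image_id,
  state,
  public
from
  aws_ec2_ami_shared
where
  image_id = 'ami-0123456789abcdef0'; -- hypothetical image ID
```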
title description
Steampipe Table: aws_ec2_application_load_balancer - Query AWS EC2 Application Load Balancer using SQL
Allows users to query AWS EC2 Application Load Balancer, providing detailed information about each load balancer within an AWS account. This includes its current state, availability zones, security groups, and other important attributes.

Table: aws_ec2_application_load_balancer - Query AWS EC2 Application Load Balancer using SQL

The AWS EC2 Application Load Balancer is a resource within Amazon's Elastic Compute Cloud (EC2) service that automatically distributes incoming application traffic across multiple targets, such as EC2 instances, in multiple Availability Zones. This enhances the fault tolerance of your applications. The load balancer serves as a single point of contact for clients, which increases the availability of your application.

Table Usage Guide

The aws_ec2_application_load_balancer table in Steampipe allows you to gain insights into the Application Load Balancers within your AWS EC2 service. The table provides detailed information about each Application Load Balancer, including its current state, associated security groups, availability zones, type, scheme, and other important attributes. You can use this table to query load balancer-specific details, monitor the health of the load balancers, assess load balancing configurations, and much more. The schema outlines various attributes of the Application Load Balancer, such as the ARN, DNS name, canonical hosted zone ID, and creation date, among others.

Examples

Security group attached to the application load balancers

Explore which security groups are linked to your application load balancers, enabling you to assess potential vulnerabilities and ensure optimal security configurations. This can be particularly useful for identifying security loopholes and reinforcing your system's defenses.

select
  name,
  jsonb_array_elements_text(security_groups) as attached_security_group
from
  aws_ec2_application_load_balancer;
select
  name,
  json_extract(json_each.value, '$') as attached_security_group
from
  aws_ec2_application_load_balancer,
  json_each(security_groups);

Availability zone information

Discover the segments that provide insights into the availability zones of your AWS EC2 application load balancer. This can be particularly useful for understanding your load balancer's distribution and identifying potential areas for improvement or troubleshooting.

select
  name,
  az ->> 'LoadBalancerAddresses' as load_balancer_addresses,
  az ->> 'OutpostId' as outpost_id,
  az ->> 'SubnetId' as subnet_id,
  az ->> 'ZoneName' as zone_name
from
  aws_ec2_application_load_balancer
  cross join jsonb_array_elements(availability_zones) as az;
select
  name,
  json_extract(az.value, '$.LoadBalancerAddresses') as load_balancer_addresses,
  json_extract(az.value, '$.OutpostId') as outpost_id,
  json_extract(az.value, '$.SubnetId') as subnet_id,
  json_extract(az.value, '$.ZoneName') as zone_name
from
  aws_ec2_application_load_balancer,
  json_each(availability_zones) as az;

List of application load balancers whose availability zone count is less than 2

Explore which application load balancers are potentially at risk due to being located in less than two availability zones. This is useful for identifying weak points in your infrastructure and improving system resilience.

select
  name,
  count(az ->> 'ZoneName') < 2 as zone_count_1
from
  aws_ec2_application_load_balancer
  cross join jsonb_array_elements(availability_zones) as az
group by
  name;
select
  name,
  count(json_extract(az.value, '$.ZoneName')) < 2 as zone_count_1
from
  aws_ec2_application_load_balancer,
  json_each(availability_zones) as az
group by
  name;

List of application load balancers whose logging is not enabled

Identify instances where application load balancers in your AWS EC2 environment have their logging feature disabled. This is useful for maintaining security and compliance by ensuring all load balancers are properly recording activity.

select
  name,
  lb ->> 'Key' as logging_key,
  lb ->> 'Value' as logging_value
from
  aws_ec2_application_load_balancer
  cross join jsonb_array_elements(load_balancer_attributes) as lb
where
  lb ->> 'Key' = 'access_logs.s3.enabled'
  and lb ->> 'Value' = 'false';
select
  name,
  json_extract(lb.value, '$.Key') as logging_key,
  json_extract(lb.value, '$.Value') as logging_value
from
  aws_ec2_application_load_balancer,
  json_each(load_balancer_attributes) as lb
where
  json_extract(lb.value, '$.Key') = 'access_logs.s3.enabled'
  and json_extract(lb.value, '$.Value') = 'false';

List of application load balancers whose deletion protection is not enabled

Identify instances where application load balancers are not safeguarded against unintended deletion. This information can be useful in ensuring system resilience and minimizing service disruptions.

select
  name,
  lb ->> 'Key' as deletion_protection_key,
  lb ->> 'Value' as deletion_protection_value
from
  aws_ec2_application_load_balancer
  cross join jsonb_array_elements(load_balancer_attributes) as lb
where
  lb ->> 'Key' = 'deletion_protection.enabled'
  and lb ->> 'Value' = 'false';
select
  name,
  json_extract(lb.value, '$.Key') as deletion_protection_key,
  json_extract(lb.value, '$.Value') as deletion_protection_value
from
  aws_ec2_application_load_balancer,
  json_each(load_balancer_attributes) as lb
where
  json_extract(lb.value, '$.Key') = 'deletion_protection.enabled'
  and json_extract(lb.value, '$.Value') = 'false';
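Since the schema also exposes each load balancer's scheme, you can single out internet-facing load balancers for review; a minimal sketch:

```sql
-- Load balancers reachable from the public internet
select
  name,
  dns_name,
  scheme
from
  aws_ec2_application_load_balancer
where
  scheme = 'internet-facing';
```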
title description
Steampipe Table: aws_ec2_application_load_balancer_metric_request_count - Query AWS EC2 Application Load Balancer Metrics using SQL
Allows users to query AWS EC2 Application Load Balancer Metrics, specifically the request count.

Table: aws_ec2_application_load_balancer_metric_request_count - Query AWS EC2 Application Load Balancer Metrics using SQL

The AWS EC2 Application Load Balancer is a component of the Elastic Load Balancing service that automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your application traffic in a single Availability Zone or across multiple Availability Zones. Elastic Load Balancing offers several load balancer types that cater to specific needs, including the Classic Load Balancer (CLB) for simple load balancing across multiple Amazon EC2 instances and the Application Load Balancer (ALB) for applications needing advanced routing capabilities, microservices, and container-based architectures.

Table Usage Guide

The aws_ec2_application_load_balancer_metric_request_count table in Steampipe provides you with information about the request count metrics of Application Load Balancers within Amazon Elastic Compute Cloud (EC2). This table allows you as a DevOps engineer, system administrator, or other technical professional to query specific details about the number of requests processed by your Application Load Balancers. You can utilize this table to gather insights on load balancing performance and to monitor the traffic your applications are receiving. The schema outlines the various attributes of the request count metric, including the load balancer name, namespace, metric name, and dimensions.

The aws_ec2_application_load_balancer_metric_request_count table provides you with metric statistics at 5 min intervals for the most recent 5 days.

Examples

Basic info

Explore the performance of your application load balancers on AWS EC2 by analyzing metrics such as average, maximum, and minimum request counts. This allows you to assess the load on your balancers and make informed decisions about scaling and resource allocation.

select
  name,
  metric_name,
  namespace,
  average,
  maximum,
  minimum,
  sample_count,
  timestamp
from
  aws_ec2_application_load_balancer_metric_request_count
order by
  name,
  timestamp;
select
  name,
  metric_name,
  namespace,
  average,
  maximum,
  minimum,
  sample_count,
  timestamp
from
  aws_ec2_application_load_balancer_metric_request_count
order by
  name,
  timestamp;

Intervals averaging less than 100 request count

Gain insights into application load balancer metrics where the average request count is less than 100, to understand the performance and traffic patterns. This can be beneficial for optimizing resource allocation and managing load effectively.

select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  average,
  sample_count,
  timestamp
from
  aws_ec2_application_load_balancer_metric_request_count
where
  average < 100
order by
  name,
  timestamp;
select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  average,
  sample_count,
  timestamp
from
  aws_ec2_application_load_balancer_metric_request_count
where
  average < 100
order by
  name,
  timestamp;
title description
Steampipe Table: aws_ec2_application_load_balancer_metric_request_count_daily - Query AWS EC2 Application Load Balancer using SQL
Allows users to query daily request count metrics of the AWS EC2 Application Load Balancer.

Table: aws_ec2_application_load_balancer_metric_request_count_daily - Query AWS EC2 Application Load Balancer using SQL

The AWS EC2 Application Load Balancer is a fully managed service that operates at the application layer, the seventh layer of the Open Systems Interconnection (OSI) model. It performs advanced traffic distribution across multiple targets, such as Amazon EC2 instances, containers, and IP addresses. The service also monitors the health of its registered targets and ensures that it routes traffic only to healthy targets.

Table Usage Guide

The aws_ec2_application_load_balancer_metric_request_count_daily table in Steampipe gives you information about the daily request count metrics of the AWS EC2 Application Load Balancer. You can use this table to query and analyze the number of requests processed by the Application Load Balancer on a daily basis. It allows you, as a DevOps engineer, to monitor the load on the balancer, identify potential spikes in traffic, and plan capacity accordingly. The schema outlines the various attributes of the metrics, including the load balancer name, namespace, metric name, and the timestamp of the metric.

The aws_ec2_application_load_balancer_metric_request_count_daily table provides you with metric statistics at 24 hour intervals for the most recent year.

Examples

Basic info

Explore the performance of your AWS EC2 application load balancers by analyzing daily metrics. This can help you identify patterns, track changes over time, and optimize your load balancing strategy for improved efficiency and performance.

select
  name,
  metric_name,
  namespace,
  average,
  maximum,
  minimum,
  sample_count,
  timestamp
from
  aws_ec2_application_load_balancer_metric_request_count_daily
order by
  name,
  timestamp;
select
  name,
  metric_name,
  namespace,
  average,
  maximum,
  minimum,
  sample_count,
  timestamp
from
  aws_ec2_application_load_balancer_metric_request_count_daily
order by
  name,
  timestamp;

Intervals averaging less than 100 request count

Identify instances where the average daily request count on your AWS EC2 application load balancer is less than 100. This can help you monitor and manage your load balancer's performance and efficiency.

select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  average,
  sample_count,
  timestamp
from
  aws_ec2_application_load_balancer_metric_request_count_daily
where
  average < 100
order by
  name,
  timestamp;
select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  average,
  sample_count,
  timestamp
from
  aws_ec2_application_load_balancer_metric_request_count_daily
where
  average < 100
order by
  name,
  timestamp;
title description
Steampipe Table: aws_ec2_autoscaling_group - Query AWS EC2 Auto Scaling Groups using SQL
Allows users to query AWS EC2 Auto Scaling Groups and access detailed information about each group's configuration, instances, policies, and more.

Table: aws_ec2_autoscaling_group - Query AWS EC2 Auto Scaling Groups using SQL

The AWS EC2 Auto Scaling Groups service allows you to ensure that you have the correct number of Amazon EC2 instances available to handle the load for your applications. Auto Scaling Groups contain a collection of EC2 instances that share similar characteristics and are treated as a logical grouping for the purposes of instance scaling and management. This service automatically increases or decreases the number of instances depending on the demand, ensuring optimal performance and cost management.

Table Usage Guide

The aws_ec2_autoscaling_group table in Steampipe provides you with information about Auto Scaling Groups within AWS EC2. This table allows you, as a DevOps engineer, to query group-specific details, including configuration, associated instances, scaling policies, and associated metadata. You can utilize this table to gather insights on groups, such as their desired, minimum and maximum sizes, default cooldown periods, load balancer names, and more. The schema outlines for you the various attributes of the Auto Scaling Group, including the ARN, creation date, health check type and grace period, launch configuration name, and associated tags.

Examples

Basic info

Explore the configuration of your AWS EC2 autoscaling group to understand its operational parameters, such as the default cooldown period and size limitations. This can help you optimize resource allocation and improve cost efficiency in your cloud environment.

select
  name,
  load_balancer_names,
  availability_zones,
  service_linked_role_arn,
  default_cooldown,
  max_size,
  min_size,
  new_instances_protected_from_scale_in
from
  aws_ec2_autoscaling_group;
select
  name,
  load_balancer_names,
  availability_zones,
  service_linked_role_arn,
  default_cooldown,
  max_size,
  min_size,
  new_instances_protected_from_scale_in
from
  aws_ec2_autoscaling_group;

Autoscaling groups with availability zone count less than 2

Identify autoscaling groups that may not be optimally configured for high availability due to having less than two availability zones. This can be useful to improve fault tolerance and ensure uninterrupted service.

select
  name,
  jsonb_array_length(availability_zones) as az_count
from
  aws_ec2_autoscaling_group
where
  jsonb_array_length(availability_zones) < 2;
select
  name,
  json_array_length(availability_zones) as az_count
from
  aws_ec2_autoscaling_group
where
  json_array_length(availability_zones) < 2;

Instances' information attached to the autoscaling group

Explore the health and configuration status of instances within an autoscaling group. This is useful to monitor and manage the scalability and availability of your AWS EC2 resources.

select
  name as autoscaling_group_name,
  ins_detail ->> 'InstanceId' as instance_id,
  ins_detail ->> 'InstanceType' as instance_type,
  ins_detail ->> 'AvailabilityZone' as az,
  ins_detail ->> 'HealthStatus' as health_status,
  ins_detail ->> 'LaunchConfigurationName' as launch_configuration_name,
  ins_detail -> 'LaunchTemplate' ->> 'LaunchTemplateName' as launch_template_name,
  ins_detail -> 'LaunchTemplate' ->> 'Version' as launch_template_version,
  ins_detail ->> 'ProtectedFromScaleIn' as protected_from_scale_in
from
  aws_ec2_autoscaling_group,
  jsonb_array_elements(instances) as ins_detail;
select
  name as autoscaling_group_name,
  json_extract(ins_detail, '$.InstanceId') as instance_id,
  json_extract(ins_detail, '$.InstanceType') as instance_type,
  json_extract(ins_detail, '$.AvailabilityZone') as az,
  json_extract(ins_detail, '$.HealthStatus') as health_status,
  json_extract(ins_detail, '$.LaunchConfigurationName') as launch_configuration_name,
  json_extract(ins_detail, '$.LaunchTemplate.LaunchTemplateName') as launch_template_name,
  json_extract(ins_detail, '$.LaunchTemplate.Version') as launch_template_version,
  json_extract(ins_detail, '$.ProtectedFromScaleIn') as protected_from_scale_in
from
  aws_ec2_autoscaling_group,
  json_each(instances) as ins_detail;

Auto scaling group health check info

Explore the health check settings of your auto scaling groups to understand their operational readiness and grace periods. This can help you assess the resilience of your system and plan for contingencies.

select
  name,
  health_check_type,
  health_check_grace_period
from
  aws_ec2_autoscaling_group;
select
  name,
  health_check_type,
  health_check_grace_period
from
  aws_ec2_autoscaling_group;
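The usage guide above mentions each group's desired, minimum, and maximum sizes; assuming the table exposes a desired_capacity column, this sketch flags groups that are already running at their configured ceiling:

```sql
-- Groups whose desired capacity has reached the configured maximum
-- (desired_capacity is assumed to be a column on this table)
select
  name,
  min_size,
  max_size,
  desired_capacity
from
  aws_ec2_autoscaling_group
where
  desired_capacity >= max_size;
```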
title description
Steampipe Table: aws_ec2_capacity_reservation - Query AWS EC2 Capacity Reservations using SQL
Allows users to query AWS EC2 Capacity Reservations to provide information about the reservations within AWS Elastic Compute Cloud (EC2).

Table: aws_ec2_capacity_reservation - Query AWS EC2 Capacity Reservations using SQL

An AWS EC2 Capacity Reservation ensures that you have reserved capacity for your Amazon EC2 instances in a specific Availability Zone for any duration. This capacity reservation helps to reduce the risks of insufficient capacity for launching instances into an Availability Zone, providing predictable instance launch times. It's a useful tool for capacity planning and managing costs, particularly for applications with predictable peaks in demand.

Table Usage Guide

The aws_ec2_capacity_reservation table in Steampipe provides you with information about Capacity Reservations within AWS Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query reservation-specific details, including reservation ID, reservation ARN, state, instance type, and associated metadata. You can utilize this table to gather insights on reservations, such as reservations per availability zone, reservations per instance type, and more. The schema outlines for you the various attributes of the EC2 Capacity Reservation, including the reservation ID, creation date, instance count, and associated tags.

Examples

Basic info

Identify instances where Amazon EC2 capacity reservations are made, gaining insights into the type of instances reserved and their current state. This can help in efficiently managing resources and understanding reservation patterns.

select
  capacity_reservation_id,
  capacity_reservation_arn,
  instance_type,
  state
from
  aws_ec2_capacity_reservation;
select
  capacity_reservation_id,
  capacity_reservation_arn,
  instance_type,
  state
from
  aws_ec2_capacity_reservation;

List EC2 expired capacity reservations

Identify instances where Amazon EC2 capacity reservations have expired. This information can be useful in managing resources and potentially freeing up unused capacity.

select
  capacity_reservation_id,
  capacity_reservation_arn,
  instance_type,
  state
from
  aws_ec2_capacity_reservation
where
  state = 'expired';
select
  capacity_reservation_id,
  capacity_reservation_arn,
  instance_type,
  state
from
  aws_ec2_capacity_reservation
where
  state = 'expired';

Get EC2 capacity reservation by ID

Determine the status and type of a specific EC2 capacity reservation in AWS, which can be useful for managing and optimizing resource allocation.

select
  capacity_reservation_id,
  capacity_reservation_arn,
  instance_type,
  state
from
  aws_ec2_capacity_reservation
where
  capacity_reservation_id = 'cr-0b30935e9fc2da81e';
select
  capacity_reservation_id,
  capacity_reservation_arn,
  instance_type,
  state
from
  aws_ec2_capacity_reservation
where
  capacity_reservation_id = 'cr-0b30935e9fc2da81e';
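
Count capacity reservations per instance type

The usage guide above mentions grouping reservations per instance type; this sketch uses only the columns already shown to summarize where your reserved capacity sits. The same query works in both PostgreSQL and SQLite.

```sql
select
  instance_type,
  count(*) as reservation_count
from
  aws_ec2_capacity_reservation
group by
  instance_type;
```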
title description
Steampipe Table: aws_ec2_classic_load_balancer - Query AWS EC2 Classic Load Balancer using SQL
Allows users to query Classic Load Balancers within Amazon EC2.

Table: aws_ec2_classic_load_balancer - Query AWS EC2 Classic Load Balancer using SQL

The AWS EC2 Classic Load Balancer automatically distributes incoming application traffic across multiple Amazon EC2 instances in the cloud. It enables you to achieve greater levels of fault tolerance in your applications, seamlessly providing the required amount of load balancing capacity needed to distribute application traffic. This service offers a highly available, scalable, and predictable performance to distribute the workload evenly to the backend servers.

Table Usage Guide

The aws_ec2_classic_load_balancer table in Steampipe provides you with information about Classic Load Balancers within Amazon Elastic Compute Cloud (EC2). This table allows you, as a cloud engineer, developer, or administrator, to query load balancer-specific details, including its availability zones, security groups, backend server descriptions, and listener descriptions. You can utilize this table to gather insights on load balancers, such as their configurations, attached instances, health checks, and more. The schema outlines the various attributes of the Classic Load Balancer for you, including the load balancer name, DNS name, created time, and associated tags.

Examples

Instances associated with classic load balancers

Identify the instances that are linked with classic load balancers to effectively manage and balance network traffic.

select
  name,
  instances
from
  aws_ec2_classic_load_balancer;
select
  name,
  instances
from
  aws_ec2_classic_load_balancer;

List of classic load balancers whose logging is not enabled

Determine the areas in which classic load balancers are operating without logging enabled. This is useful for identifying potential security gaps, as logging provides a record of all requests handled by the load balancer.

select
  name,
  access_log_enabled
from
  aws_ec2_classic_load_balancer
where
  access_log_enabled = 'false';
select
  name,
  access_log_enabled
from
  aws_ec2_classic_load_balancer
where
  access_log_enabled = 'false';

Security groups attached to each classic load balancer

Identify the security groups associated with each classic load balancer to ensure proper access control and minimize potential security risks.

select
  name,
  jsonb_array_elements_text(security_groups) as sg
from
  aws_ec2_classic_load_balancer;
select
  name,
  json_extract(json_each.value, '$') as sg
from
  aws_ec2_classic_load_balancer,
  json_each(security_groups);

Classic load balancers listener info

Uncover the details of your classic load balancer's listeners to understand how each instance is configured, including the protocols used, port numbers, SSL certificates, and any associated policy names. This information can help you manage and optimize your load balancing strategy.

select
  name,
  listener_description -> 'Listener' ->> 'InstancePort' as instance_port,
  listener_description -> 'Listener' ->> 'InstanceProtocol' as instance_protocol,
  listener_description -> 'Listener' ->> 'LoadBalancerPort' as load_balancer_port,
  listener_description -> 'Listener' ->> 'Protocol' as load_balancer_protocol,
  listener_description -> 'Listener' ->> 'SSLCertificateId' as ssl_certificate,
  listener_description ->> 'PolicyNames' as policy_names
from
  aws_ec2_classic_load_balancer
  cross join jsonb_array_elements(listener_descriptions) as listener_description;
select
  name,
  json_extract(listener_description.value, '$.Listener.InstancePort') as instance_port,
  json_extract(listener_description.value, '$.Listener.InstanceProtocol') as instance_protocol,
  json_extract(listener_description.value, '$.Listener.LoadBalancerPort') as load_balancer_port,
  json_extract(listener_description.value, '$.Listener.Protocol') as load_balancer_protocol,
  json_extract(listener_description.value, '$.Listener.SSLCertificateId') as ssl_certificate,
  json_extract(listener_description.value, '$.PolicyNames') as policy_names
from
  aws_ec2_classic_load_balancer,
  json_each(listener_descriptions) as listener_description;

Health check info

Explore the health status of your classic load balancers in AWS EC2 by analyzing parameters such as threshold values, check intervals, and timeouts. This information can be crucial for maintaining optimal server performance and minimizing downtime.

select
  name,
  healthy_threshold,
  health_check_interval,
  health_check_target,
  health_check_timeout,
  unhealthy_threshold
from
  aws_ec2_classic_load_balancer;
select
  name,
  healthy_threshold,
  health_check_interval,
  health_check_target,
  health_check_timeout,
  unhealthy_threshold
from
  aws_ec2_classic_load_balancer;
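
List classic load balancers with a long health check interval

Building on the health check columns above, this hedged sketch flags load balancers whose checks run infrequently; the 30-second cutoff is an arbitrary example threshold, not an AWS recommendation. The query is identical in PostgreSQL and SQLite.

```sql
select
  name,
  health_check_interval,
  health_check_target
from
  aws_ec2_classic_load_balancer
where
  health_check_interval > 30;
```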
title description
Steampipe Table: aws_ec2_client_vpn_endpoint - Query AWS EC2 Client VPN Endpoints using SQL
Allows users to query AWS EC2 Client VPN Endpoints to retrieve detailed information about the configuration, status, and associated network details of each endpoint.

Table: aws_ec2_client_vpn_endpoint - Query AWS EC2 Client VPN Endpoints using SQL

The AWS Client VPN Endpoint is a scalable, fully managed VPN service that enables users to securely access their AWS resources and on-premises networks. With Client VPN, you can connect to your resources from any location using an OpenVPN-based VPN client.

Table Usage Guide

The aws_ec2_client_vpn_endpoint table in Steampipe provides you with information about the Client VPN endpoints within AWS Elastic Compute Cloud (EC2). This table enables you, as a DevOps engineer, security analyst, or other IT professional, to query VPN endpoint-specific details, including the endpoint configuration, associated network details, connection logs, and associated metadata. You can utilize this table to gather insights on VPN endpoints, such as the associated VPC, Subnets, Security Groups, and more. The schema outlines the various attributes of the VPN endpoint for you, including the endpoint ID, creation time, DNS server, VPN protocol, and associated tags.

Examples

Basic Info

Explore the status and configuration details of your AWS EC2 Client VPN endpoints to understand their operational state and settings. This can be beneficial for assessing your network's security posture and troubleshooting connectivity issues.

select
  title,
  description,
  status,
  client_vpn_endpoint_id,
  transport_protocol,
  creation_time,
  tags
from
  aws_ec2_client_vpn_endpoint;
select
  title,
  description,
  status,
  client_vpn_endpoint_id,
  transport_protocol,
  creation_time,
  tags
from
  aws_ec2_client_vpn_endpoint;

List client VPN endpoints that are not in available state

Determine the areas in which your client VPN endpoints are not available. This can be useful for troubleshooting connectivity issues or managing network resources.

select
  title,
  status,
  client_vpn_endpoint_id,
  transport_protocol,
  tags
from
  aws_ec2_client_vpn_endpoint
where
  status ->> 'Code' <> 'available';
select
  title,
  status,
  client_vpn_endpoint_id,
  transport_protocol,
  tags
from
  aws_ec2_client_vpn_endpoint
where
  json_extract(status, '$.Code') <> 'available';

List client VPN endpoints created in the last 30 days

Determine the areas in which new client VPN endpoints have been established in the past month. This can help manage and monitor recent network expansions or changes.

select
  title,
  status ->> 'Code' as status,
  client_vpn_endpoint_id,
  transport_protocol,
  tags
from
  aws_ec2_client_vpn_endpoint
where
  creation_time >= now() - interval '30' day;
select
  title,
  json_extract(status, '$.Code') as status,
  client_vpn_endpoint_id,
  transport_protocol,
  tags
from
  aws_ec2_client_vpn_endpoint
where
  creation_time >= datetime('now', '-30 day');

Get the security group and the VPC details of client VPN endpoints created in the last 30 days

Determine the security setup of recently created VPN endpoints, including their associated security groups and VPC details. This is useful for reviewing and auditing the security configurations of new VPN connections in your network.

select
  title,
  status ->> 'Code' as status,
  client_vpn_endpoint_id,
  security_group_ids,
  vpc_id,
  vpn_port,
  vpn_protocol,
  transport_protocol,
  tags
from
  aws_ec2_client_vpn_endpoint
where
  creation_time >= now() - interval '30' day;
select
  title,
  json_extract(status, '$.Code') as status,
  client_vpn_endpoint_id,
  security_group_ids,
  vpc_id,
  vpn_port,
  vpn_protocol,
  transport_protocol,
  tags
from
  aws_ec2_client_vpn_endpoint
where
  creation_time >= datetime('now', '-30 day');

Get the security group and the VPC details of client VPN endpoints

Explore the security settings and network details of your client VPN endpoints. This can help in assessing the security measures in place and understanding the network configuration, which is crucial for maintaining a secure and efficient VPN service.

select
  title,
  status ->> 'Code' as status,
  client_vpn_endpoint_id,
  security_group_ids,
  vpc_id,
  vpn_port,
  vpn_protocol,
  transport_protocol,
  tags
from
  aws_ec2_client_vpn_endpoint;
select
  title,
  json_extract(status, '$.Code') as status,
  client_vpn_endpoint_id,
  security_group_ids,
  vpc_id,
  vpn_port,
  vpn_protocol,
  transport_protocol,
  tags
from
  aws_ec2_client_vpn_endpoint;

Get the logging configuration of client VPN endpoints

Determine the status of client VPN endpoints and assess whether their logging configurations are enabled. This can be useful for monitoring and troubleshooting VPN connectivity issues.

select
  title,
  status ->> 'Code' as status,
  client_vpn_endpoint_id,
  connection_log_options ->> 'Enabled' as connection_log_options_enabled,
  connection_log_options ->> 'CloudwatchLogGroup' as connection_log_options_cloudwatch_log_group,
  connection_log_options ->> 'CloudwatchLogStream' as connection_log_options_cloudwatch_log_stream,
  tags
from
  aws_ec2_client_vpn_endpoint;
select
  title,
  json_extract(status, '$.Code') as status,
  client_vpn_endpoint_id,
  json_extract(connection_log_options, '$.Enabled') as connection_log_options_enabled,
  json_extract(connection_log_options, '$.CloudwatchLogGroup') as connection_log_options_cloudwatch_log_group,
  json_extract(connection_log_options, '$.CloudwatchLogStream') as connection_log_options_cloudwatch_log_stream,
  tags
from
  aws_ec2_client_vpn_endpoint;
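
List client VPN endpoints with connection logging disabled

Using the connection_log_options column shown above, this sketch filters for endpoints that are not writing connection logs to CloudWatch; it assumes the provider serializes the JSON boolean as the text 'false'.

```sql
select
  title,
  client_vpn_endpoint_id,
  connection_log_options ->> 'Enabled' as connection_logging
from
  aws_ec2_client_vpn_endpoint
where
  connection_log_options ->> 'Enabled' = 'false';
```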

Get the authentication information of client VPN endpoints

This query is used to gain insights into the authentication information of client VPN endpoints within the AWS EC2 service. It's particularly useful for understanding the type of authentication being used and the details of the mutual authentication, which can help in assessing security measures and compliance requirements.

select
  title,
  status ->> 'Code' as status,
  client_vpn_endpoint_id,
  auth ->> 'Type' as authentication_options_type,
  auth -> 'MutualAuthentication' ->> 'ClientRootCertificateChain' as authentication_client_root_certificate_chain,
  authentication_options,
  tags
from
  aws_ec2_client_vpn_endpoint,
  jsonb_array_elements(authentication_options) as auth;
select
  title,
  json_extract(status, '$.Code') as status,
  client_vpn_endpoint_id,
  json_extract(auth.value, '$.Type') as authentication_options_type,
  json_extract(auth.value, '$.MutualAuthentication.ClientRootCertificateChain') as authentication_client_root_certificate_chain,
  authentication_options,
  tags
from
  aws_ec2_client_vpn_endpoint,
  json_each(authentication_options) as auth;
title description
Steampipe Table: aws_ec2_gateway_load_balancer - Query AWS EC2 Gateway Load Balancer using SQL
Allows users to query AWS EC2 Gateway Load Balancer details, including its configuration, state, type, and associated tags.

Table: aws_ec2_gateway_load_balancer - Query AWS EC2 Gateway Load Balancer using SQL

The AWS EC2 Gateway Load Balancer is a resource that operates at the third layer of the Open Systems Interconnection (OSI) model, the network layer. It is designed to manage, scale, and secure your network traffic in a simple and cost-effective manner. This service provides you with a single point of contact for all network traffic, regardless of the scale, and ensures that it is efficiently distributed across multiple resources.

Table Usage Guide

The aws_ec2_gateway_load_balancer table in Steampipe provides you with information about Gateway Load Balancers within Amazon Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query load balancer-specific details, including its configuration, state, type, and associated tags. You can utilize this table to gather insights on load balancers, such as their availability zones, subnets, and security groups. The schema outlines the various attributes of the Gateway Load Balancer for you, including the load balancer ARN, creation date, DNS name, scheme, and associated tags.

Examples

Basic gateway load balancer info

Determine the areas in which your AWS EC2 gateway load balancer is deployed and its current operational state. This information can help you assess the elements within your network infrastructure and optimize for better performance.

select
  name,
  arn,
  type,
  state_code,
  vpc_id,
  availability_zones
from
  aws_ec2_gateway_load_balancer;

select
  name,
  arn,
  type,
  state_code,
  vpc_id,
  availability_zones
from
  aws_ec2_gateway_load_balancer;

Availability zone information of all the gateway load balancers

Determine the areas in which your gateway load balancers are located and gain insights into their specific settings. This can help you assess your load balancing strategy and optimize resource allocation.

select
  name,
  az ->> 'LoadBalancerAddresses' as load_balancer_addresses,
  az ->> 'OutpostId' as outpost_id,
  az ->> 'SubnetId' as subnet_id,
  az ->> 'ZoneName' as zone_name
from
  aws_ec2_gateway_load_balancer,
  jsonb_array_elements(availability_zones) as az;

select
  name,
  json_extract(az.value, '$.LoadBalancerAddresses') as load_balancer_addresses,
  json_extract(az.value, '$.OutpostId') as outpost_id,
  json_extract(az.value, '$.SubnetId') as subnet_id,
  json_extract(az.value, '$.ZoneName') as zone_name
from
  aws_ec2_gateway_load_balancer,
  json_each(availability_zones) as az;

List of gateway load balancers whose availability zone count is less than 2

Determine the areas in which gateway load balancers may be at risk of service disruption due to having less than two availability zones. This can help in proactive infrastructure planning and risk mitigation.

select
  name,
  count(az ->> 'ZoneName') as zone_count
from
  aws_ec2_gateway_load_balancer,
  jsonb_array_elements(availability_zones) as az
group by
  name
having
  count(az ->> 'ZoneName') < 2;
select
  name,
  count(json_extract(az.value, '$.ZoneName')) as zone_count
from
  aws_ec2_gateway_load_balancer,
  json_each(availability_zones) as az
group by
  name
having
  count(json_extract(az.value, '$.ZoneName')) < 2;

List of gateway load balancers whose deletion protection is not enabled

Identify instances where gateway load balancers do not have deletion protection enabled. This can be useful to ensure the security and longevity of your data by avoiding accidental deletion.

select
  name,
  lb ->> 'Key' as deletion_protection_key,
  lb ->> 'Value' as deletion_protection_value
from
  aws_ec2_gateway_load_balancer,
  jsonb_array_elements(load_balancer_attributes) as lb
where
  lb ->> 'Key' = 'deletion_protection.enabled'
  and lb ->> 'Value' = 'false';
select
  name,
  json_extract(lb.value, '$.Key') as deletion_protection_key,
  json_extract(lb.value, '$.Value') as deletion_protection_value
from
  aws_ec2_gateway_load_balancer,
  json_each(load_balancer_attributes) as lb
where
  json_extract(lb.value, '$.Key') = 'deletion_protection.enabled'
  and json_extract(lb.value, '$.Value') = 'false';

List of gateway load balancers whose load balancing cross zone is enabled

Explore which gateway load balancers have the cross-zone load balancing feature enabled. This is useful in understanding the traffic distribution across multiple zones for better load balancing and increased application availability.

select
  name,
  lb ->> 'Key' as load_balancing_cross_zone_key,
  lb ->> 'Value' as load_balancing_cross_zone_value
from
  aws_ec2_gateway_load_balancer,
  jsonb_array_elements(load_balancer_attributes) as lb
where
  lb ->> 'Key' = 'load_balancing.cross_zone.enabled'
  and lb ->> 'Value' = 'true';
select
  name,
  json_extract(lb.value, '$.Key') as load_balancing_cross_zone_key,
  json_extract(lb.value, '$.Value') as load_balancing_cross_zone_value
from
  aws_ec2_gateway_load_balancer,
  json_each(load_balancer_attributes) as lb
where
  json_extract(lb.value, '$.Key') = 'load_balancing.cross_zone.enabled'
  and json_extract(lb.value, '$.Value') = 'true';

Security group attached to the gateway load balancers

Identify instances where your security groups are linked to your gateway load balancers. This can help you assess your security setup and ensure appropriate measures are in place.

select
  name,
  jsonb_array_elements_text(security_groups) as attached_security_group
from
  aws_ec2_gateway_load_balancer;
select
  name,
  json_extract(json_each.value, '$') as attached_security_group
from
  aws_ec2_gateway_load_balancer,
  json_each(security_groups);

List of gateway load balancer with state other than active

Identify instances where gateway load balancers in AWS EC2 are not in an 'active' state. This is useful to pinpoint potential issues or disruptions in network traffic routing.

select
  name,
  state_code
from
  aws_ec2_gateway_load_balancer
where
  state_code <> 'active';
select
  name,
  state_code
from
  aws_ec2_gateway_load_balancer
where
  state_code != 'active';
title description
Steampipe Table: aws_ec2_instance - Query AWS EC2 Instances using SQL
Allows users to query AWS EC2 Instances for comprehensive data on each instance, including instance type, state, tags, and more.

Table: aws_ec2_instance - Query AWS EC2 Instances using SQL

The AWS EC2 Instance is a virtual server in Amazon's Elastic Compute Cloud (EC2) for running applications on the Amazon Web Services (AWS) infrastructure. It provides scalable computing capacity in the AWS cloud, eliminating the need to invest in hardware up front, so you can develop and deploy applications faster. With EC2, you can launch as many or as few virtual servers as you need, configure security and networking, and manage storage.

Table Usage Guide

The aws_ec2_instance table in Steampipe provides you with information about EC2 Instances within AWS Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query instance-specific details, including instance state, launch time, instance type, and associated metadata. You can utilize this table to gather insights on instances, such as instances with specific tags, instances in a specific state, instances of a specific type, and more. The schema outlines the various attributes of the EC2 instance for you, including the instance ID, instance state, instance type, and associated tags.

Examples

Instance count in each availability zone

Discover the distribution of instances across different availability zones and types within your AWS EC2 service. This helps in understanding load balancing and can aid in optimizing resource utilization.

select
  placement_availability_zone as az,
  instance_type,
  count(*)
from
  aws_ec2_instance
group by
  placement_availability_zone,
  instance_type;
select
  placement_availability_zone as az,
  instance_type,
  count(*)
from
  aws_ec2_instance
group by
  placement_availability_zone,
  instance_type;

List instances whose detailed monitoring is not enabled

Determine the areas in which detailed monitoring is not enabled for your AWS EC2 instances. This is useful for identifying potential blind spots in your system's monitoring coverage.

select
  instance_id,
  monitoring_state
from
  aws_ec2_instance
where
  monitoring_state = 'disabled';
select
  instance_id,
  monitoring_state
from
  aws_ec2_instance
where
  monitoring_state = 'disabled';

Count the number of instances by instance type

Determine the distribution of your virtual servers based on their configurations, allowing you to assess your resource allocation and optimize your infrastructure management strategy.

select
  instance_type,
  count(instance_type) as count
from
  aws_ec2_instance
group by
  instance_type;
select
  instance_type,
  count(instance_type) as count
from
  aws_ec2_instance
group by
  instance_type;

List instances stopped for more than 30 days

Determine the areas in which AWS EC2 instances have been stopped for over 30 days. This can be useful for identifying and managing instances that may be unnecessarily consuming resources or costing money.

select
  instance_id,
  instance_state,
  launch_time,
  state_transition_time
from
  aws_ec2_instance
where
  instance_state = 'stopped'
  and state_transition_time <= (current_date - interval '30' day);
select
  instance_id,
  instance_state,
  launch_time,
  state_transition_time
from
  aws_ec2_instance
where
  instance_state = 'stopped'
  and state_transition_time <= date('now', '-30 day');

List of instances without application tag key

Determine the areas in which EC2 instances are lacking the 'application' tag. This is useful to identify instances that may not be following your organization's tagging strategy, ensuring better resource management and cost tracking.

select
  instance_id,
  tags
from
  aws_ec2_instance
where
  not tags :: JSONB ? 'application';
select
  instance_id,
  tags
from
  aws_ec2_instance
where
  json_extract(tags, '$.application') is null;

Get maintenance options for each instance

Determine the status of each instance's automatic recovery feature to plan for potential maintenance needs. This can help in understanding the instances' resilience and ensure uninterrupted services.

select
  instance_id,
  instance_state,
  launch_time,
  maintenance_options ->> 'AutoRecovery' as auto_recovery
from
  aws_ec2_instance;
select
  instance_id,
  instance_state,
  launch_time,
  json_extract(maintenance_options, '$.AutoRecovery') as auto_recovery
from
  aws_ec2_instance;

Get license details for each instance

Determine the license details associated with each of your instances to better manage and track your licensing agreements. This can help ensure compliance and avoid potential legal issues.

select
  instance_id,
  instance_type,
  instance_state,
  l ->> 'LicenseConfigurationArn' as license_configuration_arn
from
  aws_ec2_instance,
  jsonb_array_elements(licenses) as l;
select
  instance_id,
  instance_type,
  instance_state,
  json_extract(l.value, '$.LicenseConfigurationArn') as license_configuration_arn
from
  aws_ec2_instance,
  json_each(licenses) as l;

Get placement group details for each instance

This query can be used to gain insights into the geographic distribution and configuration of your AWS EC2 instances. It helps in managing resources efficiently by understanding their placement details such as affinity, availability zone, and tenancy.

select
  instance_id,
  instance_state,
  placement_affinity,
  placement_group_id,
  placement_group_name,
  placement_availability_zone,
  placement_host_id,
  placement_host_resource_group_arn,
  placement_partition_number,
  placement_tenancy
from
  aws_ec2_instance;
select
  instance_id,
  instance_state,
  placement_affinity,
  placement_group_id,
  placement_group_name,
  placement_availability_zone,
  placement_host_id,
  placement_host_resource_group_arn,
  placement_partition_number,
  placement_tenancy
from
  aws_ec2_instance;

List of EC2 instances provisioned with undesired instance types (where t2.large and m3.medium are the desired types)

Identify instances where EC2 instances have been provisioned with types other than the desired ones, such as t2.large and m3.medium. This can help you manage your resources more effectively by spotting any instances that may not meet your specific needs or standards.

select
  instance_type,
  count(*) as count
from
  aws_ec2_instance
where
  instance_type not in ('t2.large', 'm3.medium')
group by
  instance_type;
select
  instance_type,
  count(*) as count
from
  aws_ec2_instance
where
  instance_type not in ('t2.large', 'm3.medium')
group by
  instance_type;

List EC2 instances having termination protection safety feature enabled

Identify EC2 instances with the termination protection safety feature enabled. This is beneficial for preventing accidental terminations and ensuring system stability.

select
  instance_id,
  disable_api_termination
from
  aws_ec2_instance
where
  disable_api_termination;
select
  instance_id,
  disable_api_termination
from
  aws_ec2_instance
where
  disable_api_termination = 1;

Find instances which have default security group attached

Discover the segments that have the default security group attached to them in order to identify potential security risks. This is useful for maintaining optimal security practices and ensuring that instances are not using default settings, which may be more vulnerable.

select
  instance_id,
  sg ->> 'GroupId' as group_id,
  sg ->> 'GroupName' as group_name
from
  aws_ec2_instance
  cross join jsonb_array_elements(security_groups) as sg
where
  sg ->> 'GroupName' = 'default';
select
  instance_id,
  json_extract(sg.value, '$.GroupId') as group_id,
  json_extract(sg.value, '$.GroupName') as group_name
from
  aws_ec2_instance,
  json_each(aws_ec2_instance.security_groups) as sg
where
  json_extract(sg.value, '$.GroupName') = 'default';

List the unencrypted volumes attached to the instances

Identify instances where data storage volumes attached to cloud-based virtual servers are not encrypted. This is useful for enhancing security measures by locating potential vulnerabilities where sensitive data might be exposed.

select
  i.instance_id,
  vols -> 'Ebs' ->> 'VolumeId' as vol_id,
  vol.encrypted
from
  aws_ec2_instance as i
  cross join jsonb_array_elements(block_device_mappings) as vols
  join aws_ebs_volume as vol on vol.volume_id = vols -> 'Ebs' ->> 'VolumeId'
where
  not vol.encrypted;
select
  i.instance_id,
  json_extract(vols.value, '$.Ebs.VolumeId') as vol_id,
  vol.encrypted
from
  aws_ec2_instance as i,
  json_each(i.block_device_mappings) as vols
  join aws_ebs_volume as vol on vol.volume_id = json_extract(vols.value, '$.Ebs.VolumeId')
where
  not vol.encrypted;

List instances with secrets in user data

Discover the instances that might contain sensitive information in their user data. This is beneficial in identifying potential security risks and ensuring data privacy compliance.

select
  instance_id,
  user_data
from
  aws_ec2_instance
where
  user_data like any (array ['%pass%', '%secret%','%token%','%key%'])
  or user_data ~ '(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[@$!%*?&])[A-Za-z\d@$!%*?&]';
select
  instance_id,
  user_data
from
  aws_ec2_instance
where
  user_data like '%pass%'
  or user_data like '%secret%'
  or user_data like '%token%'
  or user_data like '%key%'
  or (user_data REGEXP '[a-z]' and user_data REGEXP '[A-Z]' and user_data REGEXP '\d' and user_data REGEXP '[@$!%*?&]');

Get launch template data for the instances

Analyze the settings to understand the configuration and specifications of your cloud instances. This can help you assess the elements within your instances, such as network interfaces and capacity reservation specifications, which can be useful for optimizing resource usage and management.

select
  instance_id,
  launch_template_data -> 'ImageId' as image_id,
  launch_template_data -> 'Placement' as placement,
  launch_template_data -> 'DisableApiStop' as disable_api_stop,
  launch_template_data -> 'MetadataOptions' as metadata_options,
  launch_template_data -> 'NetworkInterfaces' as network_interfaces,
  launch_template_data -> 'BlockDeviceMappings' as block_device_mappings,
  launch_template_data -> 'CapacityReservationSpecification' as capacity_reservation_specification
from
  aws_ec2_instance;
select
  instance_id,
  json_extract(launch_template_data, '$.ImageId') as image_id,
  json_extract(launch_template_data, '$.Placement') as placement,
  json_extract(launch_template_data, '$.DisableApiStop') as disable_api_stop,
  json_extract(launch_template_data, '$.MetadataOptions') as metadata_options,
  json_extract(launch_template_data, '$.NetworkInterfaces') as network_interfaces,
  json_extract(launch_template_data, '$.BlockDeviceMappings') as block_device_mappings,
  json_extract(launch_template_data, '$.CapacityReservationSpecification') as capacity_reservation_specification
from
  aws_ec2_instance;

Get subnet details for each instance

Explore the association between instances and subnets in your AWS environment. This can be helpful in understanding how resources are distributed and for planning infrastructure changes or improvements.

select 
  i.instance_id, 
  i.vpc_id, 
  i.subnet_id, 
  s.tags ->> 'Name' as subnet_name
from 
  aws_ec2_instance as i, 
  aws_vpc_subnet as s 
where 
  i.subnet_id = s.subnet_id;
select 
  i.instance_id, 
  i.vpc_id, 
  i.subnet_id, 
  json_extract(s.tags, '$.Name') as subnet_name
from 
  aws_ec2_instance as i, 
  aws_vpc_subnet as s 
where 
  i.subnet_id = s.subnet_id;
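
Get the VPC name for each instance

As a sketch of the same join pattern shown above, assuming the companion aws_vpc table is available in your Steampipe workspace, you can resolve each instance's VPC Name tag.

```sql
select
  i.instance_id,
  i.vpc_id,
  v.tags ->> 'Name' as vpc_name
from
  aws_ec2_instance as i
  join aws_vpc as v on i.vpc_id = v.vpc_id;
```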
title description
Steampipe Table: aws_ec2_instance_availability - Query AWS EC2 Instance Availability using SQL
Allows users to query AWS EC2 Instance Availability and retrieve detailed information about the availability of EC2 instances in each AWS region.

Table: aws_ec2_instance_availability - Query AWS EC2 Instance Availability using SQL

The AWS EC2 Instance Availability is a feature that allows you to monitor the operational status of your instances in real-time. It provides information about any scheduled events for your instances and any status checks that have failed. This service is crucial for maintaining the reliability, availability, and performance of your AWS resources and applications on AWS.

Table Usage Guide

The aws_ec2_instance_availability table in Steampipe provides you with information about the availability of AWS EC2 instances in each AWS region. This table allows you, as a DevOps engineer, to query instance-specific details, including instance type, product description, and spot price history. You can utilize this table to gather insights on instance availability, such as the types of instances available in a region, the spot price history of an instance type, and more. The schema outlines the various attributes of the EC2 instance availability for you, including the instance type, product description, timestamp of the spot price history, and the spot price itself.

Examples

List of instance types available in us-east-1 region

Explore the range of instance types accessible in a specific geographic region to optimize resource allocation and cost efficiency. This is particularly useful for businesses seeking to manage their cloud-based resources more effectively.

select
  instance_type,
  location
from
  aws_ec2_instance_availability
where
  location = 'us-east-1';
select
  instance_type,
  location
from
  aws_ec2_instance_availability
where
  location = 'us-east-1';

Check if the r5.12xlarge instance type is available in af-south-1

Determine the availability of a specific instance type in a particular AWS region. This is useful for planning resource allocation and managing infrastructure costs.

select
  instance_type,
  location
from
  aws_ec2_instance_availability
where
  location = 'af-south-1'
  and instance_type = 'r5.12xlarge';
select
  instance_type,
  location
from
  aws_ec2_instance_availability
where
  location = 'af-south-1'
  and instance_type = 'r5.12xlarge';
title description
Steampipe Table: aws_ec2_instance_metric_cpu_utilization - Query AWS EC2 Instance Metrics using SQL
Allows users to query EC2 Instance CPU Utilization metrics from AWS CloudWatch.

Table: aws_ec2_instance_metric_cpu_utilization - Query AWS EC2 Instance Metrics using SQL

The AWS EC2 Instance Metrics is a feature of Amazon EC2 (Elastic Compute Cloud) that provides detailed reports on the performance of your EC2 instances. These metrics include CPU utilization, which measures the percentage of total CPU time spent on various tasks within the EC2 instance. By querying these metrics using SQL, you can gain insights into your instance's performance and optimize resource usage.

Table Usage Guide

The aws_ec2_instance_metric_cpu_utilization table in Steampipe provides you with information about CPU utilization metrics of EC2 instances within AWS CloudWatch. This table allows you, as a DevOps engineer, system administrator, or other technical professional, to query CPU-specific details, including the instance's average, maximum, and minimum CPU utilization. You can utilize this table to gather insights on instance performance, such as identifying instances with high CPU utilization, analyzing CPU usage patterns, and more. The schema outlines the various attributes of the EC2 instance CPU utilization metrics for you, including the instance ID, namespace, metric name, and statistics.

Examples

Basic info

Explore which AWS EC2 instances have varying CPU utilization levels and when these fluctuations occur. This information can help identify instances that may require optimization for improved performance and cost efficiency.

select
  instance_id,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization
order by
  instance_id,
  timestamp;
select
  instance_id,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization
order by
  instance_id,
  timestamp;

CPU Over 80% average

Determine the areas in which instances of your AWS EC2 service are experiencing high CPU utilization, specifically where the average CPU usage exceeds 80%. This can help in identifying potential performance issues and optimize resource allocation.

select
  instance_id,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization
where average > 80
order by
  instance_id,
  timestamp;
select
  instance_id,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization
where average > 80
order by
  instance_id,
  timestamp;
title description
Steampipe Table: aws_ec2_instance_metric_cpu_utilization_daily - Query AWS EC2 Instances using SQL
Allows users to query daily CPU utilization metrics of AWS EC2 instances.

Table: aws_ec2_instance_metric_cpu_utilization_daily - Query AWS EC2 Instances using SQL

The AWS EC2 Instance is a virtual server in Amazon's Elastic Compute Cloud (EC2) for running applications on the Amazon Web Services (AWS) infrastructure. It provides scalable computing capacity in the AWS Cloud, allowing developers to launch as many or as few virtual servers as needed. The CPU Utilization metric provides the percentage of CPU utilization for an EC2 instance, averaged over a daily period.

Table Usage Guide

The aws_ec2_instance_metric_cpu_utilization_daily table in Steampipe provides you with information about the daily CPU utilization metrics of AWS EC2 instances. This table allows you, as a DevOps engineer, to query instance-specific details, including average, maximum, and minimum CPU utilization, and associated timestamps. You can utilize this table to gather insights on CPU usage patterns over time, such as instances with high or low CPU utilization, instances with abnormal CPU usage patterns, and more. The schema outlines the various attributes of the CPU utilization metrics for you, including the instance ID, timestamp, average CPU utilization, maximum CPU utilization, and minimum CPU utilization.

The aws_ec2_instance_metric_cpu_utilization_daily table provides you with metric statistics at 24-hour intervals for the last year.
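Because a full year of daily statistics is retained, longer-term trends can be summarized with a simple aggregate. The following sketch (PostgreSQL syntax, using only the columns documented above) averages daily CPU utilization per instance over the last 30 days; a SQLite variant would use datetime('now', '-30 days') for the time filter.

```sql
-- Sketch: 30-day summary of daily CPU statistics per instance.
select
  instance_id,
  round(avg(average)::numeric, 2) as avg_cpu_30d,
  round(max(maximum)::numeric, 2) as max_cpu_30d
from
  aws_ec2_instance_metric_cpu_utilization_daily
where
  timestamp >= now() - interval '30' day
group by
  instance_id;
```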

Examples

Basic info

Determine the areas in which daily CPU utilization of AWS EC2 instances fluctuates, allowing for more effective resource management and cost optimization.

select
  instance_id,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_daily
order by
  instance_id,
  timestamp;
select
  instance_id,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_daily
order by
  instance_id,
  timestamp;

CPU Over 80% average

Determine the areas in which your AWS EC2 instances are utilizing more than 80% of their CPU capacity on average. This can help in identifying potential performance bottlenecks and planning for capacity upgrades.

select
  instance_id,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_daily
where average > 80
order by
  instance_id,
  timestamp;
select
  instance_id,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_daily
where average > 80
order by
  instance_id,
  timestamp;

CPU daily average < 1%

Determine the areas in which your AWS EC2 instances are underutilized, specifically where daily average CPU usage is less than 1%. This can help identify potential cost savings by downsizing or eliminating these underused resources.

select
  instance_id,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_daily
where average < 1
order by
  instance_id,
  timestamp;
select
  instance_id,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_daily
where average < 1
order by
  instance_id,
  timestamp;
title description
Steampipe Table: aws_ec2_instance_metric_cpu_utilization_hourly - Query AWS EC2 Instance Metrics using SQL
Allows users to query AWS EC2 Instance CPU Utilization metrics on an hourly basis.

Table: aws_ec2_instance_metric_cpu_utilization_hourly - Query AWS EC2 Instance Metrics using SQL

The AWS EC2 Instance Metrics service provides insights into the performance of your EC2 instances. It allows you to monitor CPU utilization in an hourly manner using SQL queries. This can assist in identifying performance bottlenecks and optimizing resource usage for your EC2 instances.

Table Usage Guide

The aws_ec2_instance_metric_cpu_utilization_hourly table in Steampipe provides you with information about the CPU Utilization metrics of EC2 instances in AWS. This table enables you as a DevOps engineer, system administrator, or other technical professional to query CPU utilization metrics on an hourly basis. This can be useful for you in monitoring system performance, identifying potential bottlenecks, and planning for capacity. The schema outlines the various attributes of the EC2 instance CPU utilization metrics for you, including the instance ID, timestamp, maximum, minimum, and average CPU utilization.

The aws_ec2_instance_metric_cpu_utilization_hourly table provides you with metric statistics at 1-hour intervals for the most recent 60 days.
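With 60 days of hourly data available, short-term spikes can be surfaced per instance. This sketch (PostgreSQL syntax, using only the columns documented above) ranks instances by their peak hourly CPU over the last 7 days; a SQLite variant would use datetime('now', '-7 days') for the time filter.

```sql
-- Sketch: per-instance peak hourly CPU over the last 7 days.
select
  instance_id,
  max(maximum) as peak_hourly_cpu
from
  aws_ec2_instance_metric_cpu_utilization_hourly
where
  timestamp >= now() - interval '7' day
group by
  instance_id
order by
  peak_hourly_cpu desc;
```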

Examples

Basic info

Explore which instances in your AWS EC2 service are experiencing fluctuating CPU utilization over time. This allows you to pinpoint specific locations where performance optimization may be needed to improve overall efficiency.

select
  instance_id,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_hourly
order by
  instance_id,
  timestamp;
select
  instance_id,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_hourly
order by
  instance_id,
  timestamp;

CPU Over 80% average

Identify instances where EC2 instances have an average CPU utilization exceeding 80%. This can help in monitoring and optimizing resource usage, ensuring efficient performance of your AWS infrastructure.

select
  instance_id,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_hourly
where average > 80
order by
  instance_id,
  timestamp;
select
  instance_id,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_hourly
where average > 80
order by
  instance_id,
  timestamp;

CPU hourly average < 1%

Determine the areas in which your AWS EC2 instances' CPU utilization is less than 1% on average per hour. This can help you identify underutilized resources and optimize your AWS usage for cost-effectiveness.

select
  instance_id,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_hourly
where average < 1
order by
  instance_id,
  timestamp;
select
  instance_id,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_ec2_instance_metric_cpu_utilization_hourly
where average < 1
order by
  instance_id,
  timestamp;
title description
Steampipe Table: aws_ec2_instance_type - Query AWS EC2 Instance Type using SQL
Allows users to query AWS EC2 Instance Type data, including details about instance type name, current generation, vCPU, memory, storage, and network performance.

Table: aws_ec2_instance_type - Query AWS EC2 Instance Type using SQL

The AWS EC2 Instance Type is a component of Amazon's Elastic Compute Cloud (EC2), which provides scalable computing capacity in the Amazon Web Services (AWS) cloud. It defines the hardware of the host computer used for the instance. Different instance types offer varying combinations of CPU, memory, storage, and networking capacity, giving you the flexibility to choose the appropriate mix of resources for your applications.

Table Usage Guide

The aws_ec2_instance_type table in Steampipe provides you with information about EC2 instance types within AWS Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query instance type-specific details, including its name, current generation, vCPU, memory, storage, and network performance. You can utilize this table to gather insights on instance types, such as their capabilities, performance, and associated metadata. The schema outlines the various attributes of the EC2 instance type for you, including the instance type, current generation, vCPU, memory, storage, and network performance.

Examples

List of instance types that support dedicated hosts

Explore which AWS EC2 instance types support a dedicated host. This is useful for identifying the types of instances that can be used for tasks requiring dedicated resources, enhancing performance and security.

select
  instance_type,
  dedicated_hosts_supported
from
  aws_ec2_instance_type
where
  dedicated_hosts_supported;
select
  instance_type,
  dedicated_hosts_supported
from
  aws_ec2_instance_type
where
  dedicated_hosts_supported = 1;

List of instance types that do not support auto recovery

Discover the segments of AWS EC2 instances that do not support auto-recovery. This is useful to identify potential risk areas in your infrastructure that may require manual intervention in case of system failures.

select
  instance_type,
  auto_recovery_supported
from
  aws_ec2_instance_type
where
  not auto_recovery_supported;
select
  instance_type,
  auto_recovery_supported
from
  aws_ec2_instance_type
where
  auto_recovery_supported = 0;

List of instance types that have more than 24 cores

Determine the areas in which AWS EC2 instance types support dedicated hosts and have more than 24 default cores. This can be useful for identifying high-performance instances suitable for resource-intensive applications.

select
  instance_type,
  dedicated_hosts_supported,
  v_cpu_info -> 'DefaultCores' as default_cores,
  v_cpu_info -> 'DefaultThreadsPerCore' as default_threads_per_core,
  v_cpu_info -> 'DefaultVCpus' as default_vcpus,
  v_cpu_info -> 'ValidCores' as valid_cores,
  v_cpu_info -> 'ValidThreadsPerCore' as valid_threads_per_core
from
  aws_ec2_instance_type
where
  (v_cpu_info ->> 'DefaultCores')::int > 24;
select
  instance_type,
  dedicated_hosts_supported,
  json_extract(v_cpu_info, '$.DefaultCores') as default_cores,
  json_extract(v_cpu_info, '$.DefaultThreadsPerCore') as default_threads_per_core,
  json_extract(v_cpu_info, '$.DefaultVCpus') as default_vcpus,
  json_extract(v_cpu_info, '$.ValidCores') as valid_cores,
  json_extract(v_cpu_info, '$.ValidThreadsPerCore') as valid_threads_per_core
from
  aws_ec2_instance_type
where
  CAST(json_extract(v_cpu_info, '$.DefaultCores') AS INTEGER) > 24;

List of instance types that do not support encryption for the root volume

Identify instances where the type of Amazon EC2 instance does not support encryption for the root volume. This is beneficial for maintaining security standards and ensuring sensitive data is adequately protected.

select
  instance_type,
  ebs_info ->> 'EncryptionSupport' as encryption_support
from
  aws_ec2_instance_type
where
  ebs_info ->> 'EncryptionSupport' = 'unsupported';
select
  instance_type,
  json_extract(ebs_info, '$.EncryptionSupport') as encryption_support
from
  aws_ec2_instance_type
where
  json_extract(ebs_info, '$.EncryptionSupport') = 'unsupported';

List of instance types eligible for the free tier

Determine the types of instances that are eligible for the free tier in AWS EC2, aiding in cost-efficient resource allocation.

select
  instance_type,
  free_tier_eligible
from
  aws_ec2_instance_type
where
  free_tier_eligible;
select
  instance_type,
  free_tier_eligible
from
  aws_ec2_instance_type
where
  free_tier_eligible = 1;
title description
Steampipe Table: aws_ec2_key_pair - Query AWS EC2 Key Pairs using SQL
Allows users to query AWS EC2 Key Pairs, providing information about key pairs which are used to securely log into EC2 instances.

Table: aws_ec2_key_pair - Query AWS EC2 Key Pairs using SQL

The AWS EC2 Key Pair is a security feature utilized within Amazon's Elastic Compute Cloud (EC2). It provides a simple, secure way to log into your instances using SSH. The key pair is composed of a public key that AWS stores, and a private key file that you store, enabling an encrypted connection to your instance.

Table Usage Guide

The aws_ec2_key_pair table in Steampipe provides you with information about Key Pairs within AWS EC2 (Elastic Compute Cloud). This table allows you, as a DevOps engineer, security team member, or system administrator, to query key pair-specific details, including key fingerprints, key material, and associated tags. You can utilize this table to gather insights on key pairs, such as verifying key fingerprints, checking the existence of specific key pairs, and more. The schema outlines the various attributes of the EC2 key pair for you, including the key pair name, key pair ID, key type, public key, and associated tags.

Examples

Basic keypair info

Analyze the settings to understand the distribution of your EC2 key pairs across various regions. This can help in managing your AWS resources efficiently and ensuring balanced utilization.

select
  key_name,
  key_pair_id,
  region
from
  aws_ec2_key_pair;
select
  key_name,
  key_pair_id,
  region
from
  aws_ec2_key_pair;

List of key pairs without an owner tag key

Identify instances where AWS EC2 key pairs are not tagged with an owner. This is useful for maintaining efficient tag management and ensuring accountability for key pair usage.

select
  key_name,
  tags
from
  aws_ec2_key_pair
where
  not tags :: JSONB ? 'owner';
select
  key_name,
  tags
from
  aws_ec2_key_pair
where
  json_extract(tags, '$.owner') IS NULL;
title description
Steampipe Table: aws_ec2_launch_configuration - Query AWS EC2 Launch Configurations using SQL
Allows users to query AWS EC2 Launch Configurations to gain insights into their configurations, metadata, and associated instances.

Table: aws_ec2_launch_configuration - Query AWS EC2 Launch Configurations using SQL

The AWS EC2 Launch Configuration is a template that an AWS Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, security groups, and block device mapping. This information allows EC2 instances to be consistently launched with your chosen configurations.

Table Usage Guide

The aws_ec2_launch_configuration table in Steampipe provides you with information about EC2 Launch Configurations within AWS Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query configuration-specific details, including associated instances, security groups, and metadata. You can utilize this table to gather insights on launch configurations, such as the instance type specified, kernel id, ram disk id, and more. The schema outlines the various attributes of the EC2 Launch Configuration for you, including the launch configuration name, creation date, image id, and associated key pairs.

Examples

Basic launch configuration info

Determine the areas in which specific configurations were launched in your AWS EC2 environment. This can help in auditing and optimizing your cloud resources for better performance and cost management.

select
  name,
  created_time,
  associate_public_ip_address,
  ebs_optimized,
  image_id,
  instance_monitoring_enabled,
  instance_type,
  key_name
from
  aws_ec2_launch_configuration;
select
  name,
  created_time,
  associate_public_ip_address,
  ebs_optimized,
  image_id,
  instance_monitoring_enabled,
  instance_type,
  key_name
from
  aws_ec2_launch_configuration;

Get IAM role attached to each launch configuration

Identify the specific IAM role attached to each EC2 launch configuration. This can be useful for understanding the permissions each configuration has, helping to ensure security and access control in your AWS environment.

select
  name,
  iam_instance_profile
from
  aws_ec2_launch_configuration;
select
  name,
  iam_instance_profile
from
  aws_ec2_launch_configuration;

List launch configurations with public IPs

Identify the launch configurations that are associated with public IP addresses. This is useful for auditing your AWS EC2 instances to ensure secure and controlled access.

select
  name,
  associate_public_ip_address
from
  aws_ec2_launch_configuration
where
  associate_public_ip_address;
select
  name,
  associate_public_ip_address
from
  aws_ec2_launch_configuration
where
  associate_public_ip_address = 1;

Security groups attached to each launch configuration

Determine the areas in which security groups are linked to each launch configuration in your AWS EC2 instances. This allows for better management of security configurations and ensures appropriate security measures are in place.

select
  name,
  jsonb_array_elements_text(security_groups) as security_groups
from
  aws_ec2_launch_configuration;
select
  name,
  json_extract(json_each.value, '$') as security_groups
from
  aws_ec2_launch_configuration,
  json_each(security_groups);

List launch configurations with secrets in user data

Discover the segments that contain sensitive information within the launch configurations, such as passwords or tokens. This query is particularly useful in identifying potential security risks and ensuring data protection standards are met.

select
  name,
  user_data
from
  aws_ec2_launch_configuration
where
  user_data like any (array ['%pass%', '%secret%','%token%','%key%'])
  or user_data ~ '(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[@$!%*?&])[A-Za-z\d@$!%*?&]';
select
  name,
  user_data
from
  aws_ec2_launch_configuration
where
  user_data like '%pass%'
  or user_data like '%secret%'
  or user_data like '%token%'
  or user_data like '%key%'
  or (
    user_data GLOB '*[a-z]*' 
    and user_data GLOB '*[A-Z]*' 
    and user_data GLOB '*[0-9]*' 
    and user_data GLOB '*[@$!%*?&]*'
  );
title description
Steampipe Table: aws_ec2_launch_template - Query AWS EC2 Launch Templates using SQL
Allows users to query AWS EC2 Launch Templates to retrieve detailed information, including the associated AMI, instance type, key pair, security groups, and user data.

Table: aws_ec2_launch_template - Query AWS EC2 Launch Templates using SQL

The AWS EC2 Launch Template is a resource within the Amazon Elastic Compute Cloud (EC2) service. It allows you to save launch parameters within Amazon EC2 so you can quickly launch instances with those settings. This ensures consistency across instances and reduces the manual effort required to configure individual instances.

Table Usage Guide

The aws_ec2_launch_template table in Steampipe provides you with information about EC2 Launch Templates within AWS Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query template-specific details, including instance type, key pair, security groups, and user data. You can utilize this table to gather insights on templates, such as associated AMIs, security configurations, instance configurations, and more. The schema outlines the various attributes of the EC2 Launch Template for you, including the template ID, creation date, default version, and associated tags.

Examples

Basic info

Explore which AWS EC2 launch templates have been created, by whom, and when. This can help in understanding the evolution of your infrastructure, including the original and most recent versions of each template.

select
  launch_template_name,
  launch_template_id,
  create_time,
  created_by,
  default_version_number,
  latest_version_number
from
  aws_ec2_launch_template;
select
  launch_template_name,
  launch_template_id,
  create_time,
  created_by,
  default_version_number,
  latest_version_number
from
  aws_ec2_launch_template;

List launch templates created by a user

Discover the segments that include launch templates created by a specific user in AWS EC2. This is beneficial for understanding and managing user-specific resources within the cloud infrastructure.

select
  launch_template_name,
  launch_template_id,
  create_time,
  created_by
from
  aws_ec2_launch_template
where
  created_by like '%turbot';
select
  launch_template_name,
  launch_template_id,
  create_time,
  created_by
from
  aws_ec2_launch_template
where
  created_by like '%turbot';

List launch templates created in the last 30 days

Identify recently created launch templates within the past month. This is useful for monitoring new additions and ensuring proper configuration and usage.

select
  launch_template_name,
  launch_template_id,
  create_time
from
  aws_ec2_launch_template
where
  create_time >= now() - interval '30' day;
select
  launch_template_name,
  launch_template_id,
  create_time
from
  aws_ec2_launch_template
where
  create_time >= datetime('now', '-30 days');
title description
Steampipe Table: aws_ec2_launch_template_version - Query AWS EC2 Launch Template Versions using SQL
Allows users to query AWS EC2 Launch Template Versions, providing details about each version of an Amazon EC2 launch template.

Table: aws_ec2_launch_template_version - Query AWS EC2 Launch Template Versions using SQL

An AWS EC2 Launch Template Version is a configuration template that helps you avoid the trouble of specifying the same instance configuration details every time you launch an instance. It includes information like the ID of the Amazon Machine Image (AMI), the instance type, key pair, security groups, and the other parameters you typically provide when launching an instance. By using versions of launch templates, you can create different configurations changeable over time without altering the original template.

Table Usage Guide

The aws_ec2_launch_template_version table in Steampipe provides you with information about each version of an Amazon EC2 launch template. This table allows you, as a DevOps engineer, system administrator, or other IT professional, to query version-specific details, including the template ID, version number, and associated metadata. You can utilize this table to gather insights on EC2 launch template versions, such as tracking changes between versions, verifying configuration details, and more. The schema outlines the various attributes of the EC2 launch template version for you, including the template ID, version number, creation date, and associated tags.

Examples

Basic info

Explore which EC2 launch templates are being used in your AWS environment, including details such as who created them and the default versions. This can help you gain insights into your AWS EC2 usage patterns and streamline your resource management.

select
  launch_template_name,
  launch_template_id,
  created_by,
  default_version,
  version_description,
  version_number
from
  aws_ec2_launch_template_version;
select
  launch_template_name,
  launch_template_id,
  created_by,
  default_version,
  version_description,
  version_number
from
  aws_ec2_launch_template_version;

List launch template versions created by a user

Determine the instances where a specific user has created versions of a launch template. This can be useful for understanding user activity and maintaining security and consistency within your AWS EC2 environment.

select
  launch_template_name,
  launch_template_id,
  create_time,
  created_by,
  version_description,
  version_number
from
  aws_ec2_launch_template_version
where
  created_by like '%turbot';
select
  launch_template_name,
  launch_template_id,
  create_time,
  created_by,
  version_description,
  version_number
from
  aws_ec2_launch_template_version
where
  created_by like '%turbot';

List launch template versions created in the last 30 days

Explore which launch template versions have been created in the past 30 days to maintain a current understanding of your AWS EC2 environment. This query is useful for tracking recent changes and ensuring the latest configurations are being utilized.

select
  launch_template_name,
  launch_template_id,
  create_time,
  default_version,
  version_number
from
  aws_ec2_launch_template_version
where
  create_time >= now() - interval '30' day;
select
  launch_template_name,
  launch_template_id,
  create_time,
  default_version,
  version_number
from
  aws_ec2_launch_template_version
where
  create_time >= datetime('now', '-30 day');

List default version launch templates

Determine the default versions of your launch templates to understand which configurations are set as standard when launching new instances. This can be helpful for maintaining consistency across your deployments.

select
  launch_template_name,
  launch_template_id,
  create_time,
  default_version,
  version_number
from
  aws_ec2_launch_template_version
where
  default_version;
select
  launch_template_name,
  launch_template_id,
  create_time,
  default_version,
  version_number
from
  aws_ec2_launch_template_version
where
  default_version = 1;

Count versions by launch template

Assess the elements within each AWS EC2 launch template to understand the total number of versions associated with each. This can be helpful in managing and tracking the evolution of your launch templates.

select
  launch_template_id,
  count(version_number) as number_of_versions
from
  aws_ec2_launch_template_version
group by
  launch_template_id;
select
  launch_template_id,
  count(version_number) as number_of_versions
from
  aws_ec2_launch_template_version
group by
  launch_template_id;
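Version counts can also be paired with the version counters on the aws_ec2_launch_template table itself. The following sketch (PostgreSQL syntax, identical in SQLite) compares the default_version_number and latest_version_number columns shown in that table's examples to spot templates whose default version lags behind the newest one.

```sql
-- Sketch: launch templates whose default version is not the latest,
-- using the default_version_number and latest_version_number columns
-- from the aws_ec2_launch_template table.
select
  launch_template_name,
  launch_template_id,
  default_version_number,
  latest_version_number
from
  aws_ec2_launch_template
where
  default_version_number <> latest_version_number;
```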

Get launch template data details of each version

Identify instances where detailed information about each launch template version is required. This can be useful for understanding and managing the different settings and configurations associated with each version.

select
  launch_template_name,
  launch_template_id,
  version_number,
  launch_template_data -> 'BlockDeviceMappings' as block_device_mappings,
  launch_template_data -> 'CapacityReservationSpecification' as capacity_reservation_specification,
  launch_template_data -> 'CpuOptions' as cpu_options,
  launch_template_data -> 'CreditSpecification' as credit_specification,
  launch_template_data -> 'DisableApiStop' as disable_api_stop,
  launch_template_data -> 'DisableApiTermination' as disable_api_termination,
  launch_template_data -> 'EbsOptimized' as ebs_optimized,
  launch_template_data -> 'ElasticGpuSpecifications' as elastic_gpu_specifications,
  launch_template_data -> 'ElasticInferenceAccelerators' as elastic_inference_accelerators,
  launch_template_data -> 'EnclaveOptions' as enclave_options,
  launch_template_data -> 'IamInstanceProfile' as iam_instance_profile,
  launch_template_data -> 'ImageId' as image_id,
  launch_template_data -> 'InstanceInitiatedShutdownBehavior' as instance_initiated_shutdown_behavior,
  launch_template_data -> 'InstanceRequirements' as instance_requirements,
  launch_template_data -> 'InstanceType' as instance_type,
  launch_template_data -> 'KernelId' as kernel_id,
  launch_template_data -> 'LicenseSpecifications' as license_specifications,
  launch_template_data -> 'MaintenanceOptions' as maintenance_options,
  launch_template_data -> 'MetadataOptions' as metadata_options,
  launch_template_data -> 'Monitoring' as monitoring,
  launch_template_data -> 'NetworkInterfaces' as network_interfaces,
  launch_template_data -> 'PrivateDnsNameOptions' as private_dns_name_options,
  launch_template_data -> 'RamDiskId' as ram_disk_id,
  launch_template_data -> 'SecurityGroupIds' as security_group_ids,
  launch_template_data -> 'SecurityGroups' as security_groups,
  launch_template_data -> 'TagSpecifications' as tag_specifications,
  launch_template_data -> 'UserData' as user_data
from
  aws_ec2_launch_template_version;
select
  launch_template_name,
  launch_template_id,
  version_number,
  json_extract(launch_template_data, '$.BlockDeviceMappings') as block_device_mappings,
  json_extract(launch_template_data, '$.CapacityReservationSpecification') as capacity_reservation_specification,
  json_extract(launch_template_data, '$.CpuOptions') as cpu_options,
  json_extract(launch_template_data, '$.CreditSpecification') as credit_specification,
  json_extract(launch_template_data, '$.DisableApiStop') as disable_api_stop,
  json_extract(launch_template_data, '$.DisableApiTermination') as disable_api_termination,
  json_extract(launch_template_data, '$.EbsOptimized') as ebs_optimized,
  json_extract(launch_template_data, '$.ElasticGpuSpecifications') as elastic_gpu_specifications,
  json_extract(launch_template_data, '$.ElasticInferenceAccelerators') as elastic_inference_accelerators,
from
  aws_ec2_launch_template_version;

List launch template versions where instance is optimized for Amazon EBS I/O

Determine the versions of launch templates that are optimized for Amazon EBS I/O. This is useful for identifying instances that are designed for high-performance EBS operations.

select
  launch_template_name,
  launch_template_id,
  version_number,
  version_description,
  ebs_optimized
from
  aws_ec2_launch_template_version
where
  ebs_optimized;
select
  launch_template_name,
  launch_template_id,
  version_number,
  version_description,
  ebs_optimized
from
  aws_ec2_launch_template_version
where
  ebs_optimized = 1;

List launch template versions where instance termination is restricted via console, CLI, or API

Determine the areas in which instance termination is restricted for various versions of launch templates. This is useful to ensure that vital instances are safeguarded from accidental termination via console, CLI, or API.

select
  launch_template_name,
  launch_template_id,
  version_number,
  version_description,
  disable_api_termination
from
  aws_ec2_launch_template_version
where
  disable_api_termination;
select
  launch_template_name,
  launch_template_id,
  version_number,
  version_description,
  disable_api_termination
from
  aws_ec2_launch_template_version
where
  disable_api_termination = 1;

List template versions where instance stop protection is enabled

Identify versions of launch templates where the protection against instance stops is activated. This is useful for managing and safeguarding critical instances within your AWS EC2 environment.

select
  launch_template_name,
  launch_template_id,
  version_number,
  disable_api_stop
from
  aws_ec2_launch_template_version
where
  disable_api_stop;
select
  launch_template_name,
  launch_template_id,
  version_number,
  disable_api_stop
from
  aws_ec2_launch_template_version
where
  disable_api_stop = 1;
title description
Steampipe Table: aws_ec2_load_balancer_listener - Query AWS EC2 Load Balancer Listeners using SQL
Allows users to query AWS EC2 Load Balancer Listener data, which provides information about listeners for an Application Load Balancer or Network Load Balancer.

Table: aws_ec2_load_balancer_listener - Query AWS EC2 Load Balancer Listeners using SQL

An AWS EC2 Load Balancer Listener is a component of the AWS Elastic Load Balancing service that checks for connection requests. It is configured with a protocol and port for front-end (client to load balancer) connections, and with default actions that determine how requests are handled. Listeners are crucial in routing requests from clients to the registered targets based on the configured routing policies.

Table Usage Guide

The aws_ec2_load_balancer_listener table in Steampipe provides you with information about listeners for an Application Load Balancer or Network Load Balancer in Amazon Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query listener-specific details, including protocol, port, SSL policy, and associated actions. You can utilize this table to gather insights on listeners, such as their current state, default actions, and certificates. The schema outlines the various attributes of the Load Balancer Listener for you, including the listener ARN, load balancer ARN, default actions, and associated tags.

Examples

Load balancer listener basic info

Determine the areas in which your AWS EC2 load balancer listeners operate, by examining crucial details such as port and protocol. This information can be beneficial in optimizing network traffic management and troubleshooting connectivity issues.

select
  title,
  arn,
  port,
  protocol
from
  aws_ec2_load_balancer_listener;
select
  title,
  arn,
  port,
  protocol
from
  aws_ec2_load_balancer_listener;

Action configuration details of each load balancer

Explore the configuration details of each load balancer's actions to understand how they are set up for authentication, fixed responses, and target group stickiness. This can be useful in assessing the security and efficiency of your load balancing setup.

select
  title,
  arn,
  action ->> 'AuthenticateCognitoConfig' as authenticate_cognito_config,
  action ->> 'AuthenticateOidcConfig' as authenticate_oidc_config,
  action ->> 'FixedResponseConfig' as fixed_response_config,
  action -> 'ForwardConfig' -> 'TargetGroupStickinessConfig' ->> 'DurationSeconds' as duration_seconds,
  action -> 'ForwardConfig' -> 'TargetGroupStickinessConfig' ->> 'Enabled' as target_group_stickiness_config_enabled
from
  aws_ec2_load_balancer_listener
  cross join jsonb_array_elements(default_actions) as action;
select
  title,
  arn,
  json_extract(action.value, '$.AuthenticateCognitoConfig') as authenticate_cognito_config,
  json_extract(action.value, '$.AuthenticateOidcConfig') as authenticate_oidc_config,
  json_extract(action.value, '$.FixedResponseConfig') as fixed_response_config,
  json_extract(action.value, '$.ForwardConfig.TargetGroupStickinessConfig.DurationSeconds') as duration_seconds,
  json_extract(action.value, '$.ForwardConfig.TargetGroupStickinessConfig.Enabled') as target_group_stickiness_config_enabled
from
  aws_ec2_load_balancer_listener,
  json_each(default_actions) as action;

List of load balancer listeners which listen to HTTP protocol

Discover the segments that are using the HTTP protocol for load balancing. This is useful for identifying potential security risks, as HTTP traffic is unencrypted and can be intercepted.

select
  title,
  arn,
  port,
  protocol
from
  aws_ec2_load_balancer_listener
where
  protocol = 'HTTP';
select
  title,
  arn,
  port,
  protocol
from
  aws_ec2_load_balancer_listener
where
  protocol = 'HTTP';
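Beyond the protocol and port, the usage guide notes that each listener also carries an SSL policy. The sketch below lists HTTPS listeners together with their negotiated policy, which is useful for spotting listeners still pinned to an outdated policy. It assumes the column is named ssl_policy; verify the column name against your plugin version's schema. The statement runs unchanged in PostgreSQL and SQLite.

```sql
select
  title,
  arn,
  port,
  ssl_policy
from
  aws_ec2_load_balancer_listener
where
  protocol = 'HTTPS';
```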
title description
Steampipe Table: aws_ec2_managed_prefix_list - Query AWS EC2 Managed Prefix Lists using SQL
Allows users to query AWS EC2 Managed Prefix Lists, providing information about IP address ranges (CIDRs), permissions, and associated metadata.

Table: aws_ec2_managed_prefix_list - Query AWS EC2 Managed Prefix Lists using SQL

The AWS EC2 Managed Prefix List is a resource that allows you to create and manage prefix lists for your AWS account. These prefix lists are used to group IP address ranges and simplify the configuration of security group rules and route table entries. They are especially useful in managing large IP address ranges and maintaining security in your AWS environment.

There are two types of prefix lists:

  • Customer-managed prefix lists - Sets of IP address ranges that you define and manage. You can share your prefix list with other AWS accounts, enabling those accounts to reference the prefix list in their own resources.
  • AWS-managed prefix lists - Sets of IP address ranges for AWS services. You cannot create, modify, share, or delete an AWS-managed prefix list.

Table Usage Guide

The aws_ec2_managed_prefix_list table in Steampipe provides you with information about Managed Prefix Lists within AWS EC2. This table allows you as a DevOps engineer to query details about IP address ranges, permissions, and associated metadata. You can utilize this table to gather insights on IP address ranges, such as which IP addresses are allowed or denied access to a VPC, the maximum number of entries that a prefix list can have, and more. The schema outlines the various attributes of the Managed Prefix List for you, including the prefix list id, name, owner id, and associated tags.

Examples

Basic Info

Explore the ownership and status of your managed prefix lists in AWS EC2. This can help you understand who controls these resources and their current operational state.

select
  name,
  id,
  arn,
  state,
  owner_id
from
  aws_ec2_managed_prefix_list;
select
  name,
  id,
  arn,
  state,
  owner_id
from
  aws_ec2_managed_prefix_list;

List customer-managed prefix lists

Explore which customer-managed prefix lists are in use to gain insights into your AWS EC2 configurations. This helps identify any potential security risks or configuration issues.

select
  name,
  id,
  arn,
  state,
  owner_id
from
  aws_ec2_managed_prefix_list
where
  owner_id <> 'AWS';
select
  name,
  id,
  arn,
  state,
  owner_id
from
  aws_ec2_managed_prefix_list
where
  owner_id != 'AWS';
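As a complement to the customer-managed query above, the AWS-managed prefix lists can be listed by inverting the owner filter. Since this uses only an equality predicate, the same statement works in both PostgreSQL and SQLite.

```sql
select
  name,
  id,
  arn,
  state,
  owner_id
from
  aws_ec2_managed_prefix_list
where
  owner_id = 'AWS';
```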

List prefix lists with IPv6 as IP address version

Determine the areas in which IPv6 is used as the IP address version within your managed prefix lists. This is useful for understanding your network's IPv6 usage and ensuring compatibility with IPv6-only systems.

select
  name,
  id,
  address_family
from
  aws_ec2_managed_prefix_list
where
  address_family = 'IPv6';
select
  name,
  id,
  address_family
from
  aws_ec2_managed_prefix_list
where
  address_family = 'IPv6';

List prefix lists by specific IDs

Determine the areas in which specific AWS EC2 managed prefix lists are being used by identifying them through their unique IDs. This query is beneficial in managing and tracking the usage of prefix lists in your AWS environment.

select
  name,
  id,
  arn,
  state,
  owner_id
from
  aws_ec2_managed_prefix_list
where
  id in ('pl-03a3e735e3467c0c4', 'pl-4ca54025');
select
  name,
  id,
  arn,
  state,
  owner_id
from
  aws_ec2_managed_prefix_list
where
  id in ('pl-03a3e735e3467c0c4', 'pl-4ca54025');

List prefix lists by specific names

Determine the areas in which specific managed prefix lists are used within the AWS EC2 service. This can be beneficial for understanding the configuration and usage of these lists in your cloud environment.

select
  name,
  id,
  arn,
  state,
  owner_id
from
  aws_ec2_managed_prefix_list
where
  name in ('testPrefix', 'com.amazonaws.us-east-2.dynamodb');
select
  name,
  id,
  arn,
  state,
  owner_id
from
  aws_ec2_managed_prefix_list
where
  name in ('testPrefix', 'com.amazonaws.us-east-2.dynamodb');

List prefix lists by a specific owner ID

Determine the areas in which specific AWS EC2 managed prefix lists are owned by a particular user. This is useful for understanding the distribution and ownership of these resources, helping to manage and organize your AWS environment effectively.

select
  name,
  id,
  arn,
  state,
  owner_id
from
  aws_ec2_managed_prefix_list
where
  owner_id = '632901234528';
select
  name,
  id,
  arn,
  state,
  owner_id
from
  aws_ec2_managed_prefix_list
where
  owner_id = '632901234528';
title description
Steampipe Table: aws_ec2_managed_prefix_list_entry - Query AWS EC2 Managed Prefix List Entry using SQL
Allows users to query AWS EC2 Managed Prefix List Entries, providing details such as the CIDR block, description, and the prefix list ID. This table is useful for understanding the IP address ranges included in a managed prefix list.

Table: aws_ec2_managed_prefix_list_entry - Query AWS EC2 Managed Prefix List Entry using SQL

The AWS EC2 Managed Prefix List Entry is a part of Amazon Elastic Compute Cloud (EC2) service. It helps you to manage IP address ranges, allowing you to create lists of IP address ranges, known as prefix lists, and use them to simplify the configuration of security groups and route tables. This makes it easier to set up, secure, and manage the network access to your Amazon EC2 instances.

Table Usage Guide

The aws_ec2_managed_prefix_list_entry table in Steampipe provides you with information about the IP address ranges, or prefixes, that AWS has added to a managed prefix list. This table allows you, as a DevOps engineer, to query prefix-specific details, including the CIDR block, description, and the prefix list ID. You can utilize this table to gather insights on the managed prefix lists, such as the IP address ranges included in a managed prefix list, and more. The schema outlines for you the various attributes of the managed prefix list entry, including the CIDR, description, and prefix list ID.

Examples

Basic Info

Explore which AWS EC2 managed prefix list entries exist in your environment. This can help you determine if there are any unexpected or unnecessary entries that may need to be addressed for security or efficiency reasons.

select
  prefix_list_id,
  cidr,
  description
from
  aws_ec2_managed_prefix_list_entry;
select
  prefix_list_id,
  cidr,
  description
from
  aws_ec2_managed_prefix_list_entry;

List customer-managed prefix lists entries

Explore which customer-managed prefix lists entries are owned by entities other than AWS. This can be useful to understand the distribution and ownership of these resources, helping you to manage and control access to your network resources.

select
  l.name,
  l.id,
  e.cidr,
  e.description,
  l.state,
  l.owner_id
from
  aws_ec2_managed_prefix_list_entry as e,
  aws_ec2_managed_prefix_list as l
where
  l.owner_id <> 'AWS';
select
  l.name,
  l.id,
  e.cidr,
  e.description,
  l.state,
  l.owner_id
from
  aws_ec2_managed_prefix_list_entry as e,
  aws_ec2_managed_prefix_list as l
where
  l.owner_id <> 'AWS';

Count prefix list entries by prefix list

Discover the segments that have varying numbers of entries in AWS EC2 managed prefix lists, providing a useful summary of the distribution of entries across different lists. This can assist in identifying any disproportionate allocation of entries which may require rebalancing.

select
  prefix_list_id,
  count(cidr) as numbers_of_entries
from
  aws_ec2_managed_prefix_list_entry
group by
  prefix_list_id;
select
  prefix_list_id,
  count(cidr) as numbers_of_entries
from
  aws_ec2_managed_prefix_list_entry
group by
  prefix_list_id;
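To make the counts above easier to read, the entry counts can be joined back to the prefix list names in the aws_ec2_managed_prefix_list table. This is a sketch combining the two tables already documented here; the join on the list ID is identical in PostgreSQL and SQLite.

```sql
select
  l.name,
  e.prefix_list_id,
  count(e.cidr) as number_of_entries
from
  aws_ec2_managed_prefix_list_entry as e
  join aws_ec2_managed_prefix_list as l
    on e.prefix_list_id = l.id
group by
  l.name,
  e.prefix_list_id;
```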
title description
Steampipe Table: aws_ec2_network_interface - Query AWS EC2 Network Interfaces using SQL
Allows users to query AWS EC2 Network Interfaces and provides comprehensive details about each interface, including its associated instances, security groups, and subnet information.

Table: aws_ec2_network_interface - Query AWS EC2 Network Interfaces using SQL

An AWS EC2 Network Interface is a virtual network interface that you can attach to an instance in a VPC. Network interfaces are the point of networking for any instance that is attached to a Virtual Private Cloud (VPC). They can include a primary private IPv4 address, one or more secondary private IPv4 addresses, one Elastic IP address per private IPv4 address, one public IPv4 address, one or more IPv6 addresses, a MAC address, one or more security groups, a source/destination check flag, and a description.

Table Usage Guide

The aws_ec2_network_interface table in Steampipe provides you with information about Network Interfaces within AWS Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query network interface-specific details, including the attached instances, associated security groups, subnet information, and more. You can utilize this table to gather insights on network interfaces, such as their status, type, private and public IP addresses, and the associated subnet and VPC details. The schema outlines for you the various attributes of the EC2 network interface, including the interface ID, description, owner ID, availability zone, and associated tags.

Examples

Basic IP address info

Determine the areas in which your AWS EC2 network interfaces are operating by exploring the type of interface, its corresponding private and public IP addresses, and its MAC address. This can be particularly useful for managing network connectivity and troubleshooting network issues within your AWS environment.

select
  network_interface_id,
  interface_type,
  description,
  private_ip_address,
  association_public_ip,
  mac_address
from
  aws_ec2_network_interface;
select
  network_interface_id,
  interface_type,
  description,
  private_ip_address,
  association_public_ip,
  mac_address
from
  aws_ec2_network_interface;

Find all ENIs with private IPs that are in a given subnet (10.66.0.0/16)

Discover the segments that have private IPs within a specific subnet. This is useful for identifying network interfaces within a particular subnet, which can aid in network management and security assessment.

select
  network_interface_id,
  interface_type,
  description,
  private_ip_address,
  association_public_ip,
  mac_address
from
  aws_ec2_network_interface
where
  private_ip_address :: cidr <<= '10.66.0.0/16';
Error: SQLite does not support CIDR operations.
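As a partial SQLite workaround, a plain text prefix match can approximate CIDR containment when the block is aligned on an octet boundary, as the /16 in this example is. This is only a sketch: it does not generalize to arbitrary prefix lengths (for example a /20), for which you would need to compare the numeric form of the address.

```sql
select
  network_interface_id,
  interface_type,
  description,
  private_ip_address
from
  aws_ec2_network_interface
where
  private_ip_address like '10.66.%';
```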

Count of ENIs by interface type

Discover the segments that have the most network interfaces in your AWS EC2 environment, helping you understand your network configuration and potentially optimize resource allocation.

select
  interface_type,
  count(interface_type) as count
from
  aws_ec2_network_interface
group by
  interface_type
order by
  count desc;
select
  interface_type,
  count(interface_type) as count
from
  aws_ec2_network_interface
group by
  interface_type
order by
  count desc;

Security groups attached to each ENI

Determine the areas in which certain security groups are attached to each network interface within your Amazon EC2 instances. This can help in managing security and access controls effectively.

select
  network_interface_id as eni,
  sg ->> 'GroupId' as "security group id",
  sg ->> 'GroupName' as "security group name"
from
  aws_ec2_network_interface
  cross join jsonb_array_elements(groups) as sg
order by
  eni;
select
  network_interface_id as eni,
  json_extract(sg, '$.GroupId') as "security group id",
  json_extract(sg, '$.GroupName') as "security group name"
from
  (
    select
      network_interface_id,
      json_each.value as sg
    from
      aws_ec2_network_interface,
      json_each(groups)
  )
order by
  eni;

Get network details for each ENI

Discover the segments that are common between your network interfaces and virtual private clouds (VPCs) to better understand your network structure. This can assist in identifying areas for potential consolidation or optimization.

select
  e.network_interface_id,
  v.vpc_id,
  v.is_default,
  v.cidr_block,
  v.state,
  v.account_id,
  v.region
from
  aws_ec2_network_interface e,
  aws_vpc v
where 
  e.vpc_id = v.vpc_id;
select
  e.network_interface_id,
  v.vpc_id,
  v.is_default,
  v.cidr_block,
  v.state,
  v.account_id,
  v.region
from
  aws_ec2_network_interface e
join
  aws_vpc v
on 
  e.vpc_id = v.vpc_id;
title description
Steampipe Table: aws_ec2_network_load_balancer - Query AWS EC2 Network Load Balancer using SQL
Allows users to query AWS EC2 Network Load Balancer data including configuration, status, and other related information.

Table: aws_ec2_network_load_balancer - Query AWS EC2 Network Load Balancer using SQL

The AWS EC2 Network Load Balancer is a high-performance load balancer that operates at the transport layer (Layer 4) and is designed to handle millions of requests per second while maintaining ultra-low latencies. It is best suited for load balancing of TCP traffic and capable of handling volatile workloads and traffic patterns. It also supports long-lived TCP connections, which are ideal for WebSocket type of applications.

Table Usage Guide

The aws_ec2_network_load_balancer table in Steampipe provides you with information about Network Load Balancers within AWS Elastic Compute Cloud (EC2). This table allows you, as a cloud administrator or DevOps engineer, to query load balancer-specific details, including type, state, availability zones, and associated metadata. You can utilize this table to gather insights on load balancers, such as their current status, associated subnets, and more. The schema outlines the various attributes of the Network Load Balancer for you, including the load balancer name, ARN, creation date, DNS name, scheme, and associated tags.

Examples

Count of AZs registered with network load balancers

Analyze the distribution of network load balancers across various availability zones to optimize resource allocation and ensure a balanced load. This can help in enhancing the application's performance and availability.

select
  name,
  count(az ->> 'ZoneName') as zone_count
from
  aws_ec2_network_load_balancer
  cross join jsonb_array_elements(availability_zones) as az
group by
  name;
select
  name,
  count(json_extract(az.value, '$.ZoneName')) as zone_count
from
  aws_ec2_network_load_balancer,
  json_each(availability_zones) as az
group by
  name;

List of network load balancers where Cross-Zone Load Balancing is not enabled

Determine the network load balancers for which Cross-Zone Load Balancing is disabled. This can be particularly useful for identifying potential areas of network inefficiency or for optimizing load balancing across zones.

select
  name,
  lb ->> 'Key' as cross_zone,
  lb ->> 'Value' as cross_zone_value
from
  aws_ec2_network_load_balancer
  cross join jsonb_array_elements(load_balancer_attributes) as lb
where
  lb ->> 'Key' = 'load_balancing.cross_zone.enabled'
  and lb ->> 'Value' = 'false';
select
  name,
  json_extract(lb.value, '$.Key') as cross_zone,
  json_extract(lb.value, '$.Value') as cross_zone_value
from
  aws_ec2_network_load_balancer,
  json_each(load_balancer_attributes) as lb
where
  json_extract(lb.value, '$.Key') = 'load_balancing.cross_zone.enabled'
  and json_extract(lb.value, '$.Value') = 'false';

List of network load balancers where logging is not enabled

Determine the areas in your network load balancers where logging is not enabled. This is essential for identifying potential security risks and ensuring compliance with data governance policies.

select
  name,
  lb ->> 'Key' as logging_key,
  lb ->> 'Value' as logging_value
from
  aws_ec2_network_load_balancer
  cross join jsonb_array_elements(load_balancer_attributes) as lb
where
  lb ->> 'Key' = 'access_logs.s3.enabled'
  and lb ->> 'Value' = 'false';
select
  name,
  json_extract(lb.value, '$.Key') as logging_key,
  json_extract(lb.value, '$.Value') as logging_value
from
  aws_ec2_network_load_balancer,
  json_each(load_balancer_attributes) as lb
where
  json_extract(lb.value, '$.Key') = 'access_logs.s3.enabled'
  and json_extract(lb.value, '$.Value') = 'false';

List of network load balancers where deletion protection is not enabled

Determine the areas in your network where load balancers are potentially vulnerable due to deletion protection not being enabled. This is particularly useful for identifying potential risks and ensuring the security and stability of your network.

select
  name,
  lb ->> 'Key' as deletion_protection_key,
  lb ->> 'Value' as deletion_protection_value
from
  aws_ec2_network_load_balancer
  cross join jsonb_array_elements(load_balancer_attributes) as lb
where
  lb ->> 'Key' = 'deletion_protection.enabled'
  and lb ->> 'Value' = 'false';
select
  name,
  json_extract(lb.value, '$.Key') as deletion_protection_key,
  json_extract(lb.value, '$.Value') as deletion_protection_value
from
  aws_ec2_network_load_balancer,
  json_each(load_balancer_attributes) as lb
where
  json_extract(lb.value, '$.Key') = 'deletion_protection.enabled'
  and json_extract(lb.value, '$.Value') = 'false';
title description
Steampipe Table: aws_ec2_network_load_balancer_metric_net_flow_count - Query AWS EC2 Network Load Balancer Metrics using SQL
Allows users to query AWS EC2 Network Load Balancer Metrics for net flow count data. This includes information such as the number of new or terminated flows per minute from a network load balancer.

Table: aws_ec2_network_load_balancer_metric_net_flow_count - Query AWS EC2 Network Load Balancer Metrics using SQL

The AWS EC2 Network Load Balancer is a high-performance load balancer that operates at the transport layer (Layer 4). It is designed to handle volatile traffic patterns and millions of requests per second for your applications. It can automatically scale to meet the needs of your applications, and you can enable cross-zone load balancing to distribute traffic evenly across all registered instances in all enabled Availability Zones.

Table Usage Guide

The aws_ec2_network_load_balancer_metric_net_flow_count table in Steampipe provides you with information about the net flow count metrics of AWS EC2 Network Load Balancers. This table allows you, as a DevOps engineer, to query net flow count-specific details, including the number of new or terminated flows per minute. You can utilize this table to gather insights on network load balancing, such as monitoring the amount of traffic processed by your load balancer, identifying trends in network traffic, and more. The schema outlines the various attributes of the net flow count metric, including the load balancer name, namespace, metric name, and dimensions.

The aws_ec2_network_load_balancer_metric_net_flow_count table provides you with metric statistics at 5-minute intervals for the most recent 5 days.

Examples

Basic info

Analyze the metrics of your AWS EC2 network load balancer to understand its performance over time. This will help you identify intervals where traffic is skewed or unusually low, allowing for timely adjustments and improved resource management.

select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  sample_count,
  timestamp
from
  aws_ec2_network_load_balancer_metric_net_flow_count
order by
  name,
  timestamp;
select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  sample_count,
  timestamp
from
  aws_ec2_network_load_balancer_metric_net_flow_count
order by
  name,
  timestamp;

Intervals where net flow count < 100

Explore instances where the average network load balance metric net flow count is less than 100, which can be useful in identifying periods of low network traffic for AWS EC2 instances. This can be beneficial in optimizing resource allocation and understanding usage patterns.

select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  average,
  sample_count,
  timestamp
from
  aws_ec2_network_load_balancer_metric_net_flow_count
where
  average < 100
order by
  name,
  timestamp;
select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  average,
  sample_count,
  timestamp
from
  aws_ec2_network_load_balancer_metric_net_flow_count
where
  average < 100
order by
  name,
  timestamp;
title description
Steampipe Table: aws_ec2_network_load_balancer_metric_net_flow_count_daily - Query AWS EC2 Network Load Balancer Metrics using SQL
Allows users to query Network Load Balancer Metrics in EC2, specifically the daily net flow count, providing insights into network traffic patterns and potential anomalies.

Table: aws_ec2_network_load_balancer_metric_net_flow_count_daily - Query AWS EC2 Network Load Balancer Metrics using SQL

The AWS EC2 Network Load Balancer is a fully managed service that automatically distributes incoming traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions. It can handle the varying load of your applications in a single Availability Zone or across multiple Availability Zones. The 'NetFlowCount' metric provides the total number of new TCP/UDP flows established from clients to targets in a specified time period.

Table Usage Guide

The aws_ec2_network_load_balancer_metric_net_flow_count_daily table in Steampipe provides you with information about network load balancer metrics within AWS Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query daily net flow count details, including timestamp, average, minimum, maximum, and sum. You can utilize this table to gather insights on network traffic patterns, detect potential network anomalies, and optimize load balancing strategies. The schema outlines the various attributes of the network load balancer metric for you, including the load balancer name, namespace, region, and metric unit.

The aws_ec2_network_load_balancer_metric_net_flow_count_daily table provides you with metric statistics at 24-hour intervals for the most recent 1 year.

Examples

Basic info

Explore which network load balancers in your AWS EC2 environment have the highest and lowest daily net flow counts. This allows you to gain insights into the performance and usage patterns of your load balancers over time.

select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  sample_count,
  timestamp
from
  aws_ec2_network_load_balancer_metric_net_flow_count_daily
order by
  name,
  timestamp;
select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  sample_count,
  timestamp
from
  aws_ec2_network_load_balancer_metric_net_flow_count_daily
order by
  name,
  timestamp;

Intervals where net flow count < 100

Determine the intervals where the average daily network flow count is less than 100 for AWS EC2 Network Load Balancer. This can be useful in identifying periods of low traffic, which could indicate underutilization or potential opportunities for cost-saving.

select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  average,
  sample_count,
  timestamp
from
  aws_ec2_network_load_balancer_metric_net_flow_count_daily
where
  average < 100
order by
  name,
  timestamp;
select
  name,
  metric_name,
  namespace,
  maximum,
  minimum,
  average,
  sample_count,
  timestamp
from
  aws_ec2_network_load_balancer_metric_net_flow_count_daily
where
  average < 100
order by
  name,
  timestamp;
title description
Steampipe Table: aws_ec2_regional_settings - Query AWS EC2 Regional Settings using SQL
Allows users to query AWS EC2 regional settings, including default EBS encryption and default EBS encryption KMS key.

Table: aws_ec2_regional_settings - Query AWS EC2 Regional Settings using SQL

The AWS EC2 Regional Settings are configurations that apply to an entire region in the Amazon Elastic Compute Cloud (EC2) service. These settings include the default EBS encryption status and the default EBS encryption KMS key, allowing encryption behavior for new volumes to be managed consistently within a specific AWS region.

Table Usage Guide

The aws_ec2_regional_settings table in Steampipe provides you with information about the regional settings of Amazon Elastic Compute Cloud (EC2). This table allows you, as a cloud administrator, security team member, or developer, to query regional settings, including default EBS encryption and the default EBS encryption KMS key. You can utilize this table to gather insights on regional settings, such as the default EBS encryption status, the default EBS encryption KMS key, and the region name. The schema outlines the various attributes of the regional settings for you, including the region, default EBS encryption, and default EBS encryption KMS key.

Examples

Basic settings info

Analyze the settings to understand the default encryption status and key for your AWS EC2 regional settings. This is useful for ensuring your data is secure and encrypted as per your organization's policies.

select
  default_ebs_encryption_enabled,
  default_ebs_encryption_key,
  title,
  region
from
  aws_ec2_regional_settings;
select
  default_ebs_encryption_enabled,
  default_ebs_encryption_key,
  title,
  region
from
  aws_ec2_regional_settings;

Settings info for a particular region

Determine the areas in which default encryption is enabled for a specific region. This query is beneficial for understanding the security configuration of your cloud storage in that particular region.

select
  default_ebs_encryption_enabled,
  default_ebs_encryption_key,
  title,
  region
from
  aws_ec2_regional_settings
where
  region = 'ap-south-1';
select
  default_ebs_encryption_enabled,
  default_ebs_encryption_key,
  title,
  region
from
  aws_ec2_regional_settings
where
  region = 'ap-south-1';

List the regions along with the key where default EBS encryption is enabled

Identify regions where the default EBS encryption is enabled. This is useful for maintaining data security and compliance by ensuring that encrypted storage is being utilized in those areas.

select
  region,
  default_ebs_encryption_enabled,
  default_ebs_encryption_key
from
  aws_ec2_regional_settings
where
  default_ebs_encryption_enabled;
select
  region,
  default_ebs_encryption_enabled,
  default_ebs_encryption_key
from
  aws_ec2_regional_settings
where
  default_ebs_encryption_enabled = 1;
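List regions where default EBS encryption is disabled

A complementary check to the query above (a sketch using the same columns): surface regions that still have default EBS encryption turned off, so they can be prioritized for remediation.

```sql
select
  region,
  default_ebs_encryption_enabled
from
  aws_ec2_regional_settings
where
  not default_ebs_encryption_enabled;
```

In SQLite, replace the condition with `default_ebs_encryption_enabled = 0`.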
title description
Steampipe Table: aws_ec2_reserved_instance - Query AWS EC2 Reserved Instances using SQL
Allows users to query AWS EC2 Reserved Instances to gather comprehensive insights on the reserved instances, such as their configurations, state, and associated tags.

Table: aws_ec2_reserved_instance - Query AWS EC2 Reserved Instances using SQL

The AWS EC2 Reserved Instances are a type of Amazon EC2 instance that allows you to reserve compute capacity for your AWS account in a specific Availability Zone, providing a significant discount compared to On-Demand pricing. These instances are recommended for applications with steady state usage, offering up to 75% savings compared to on-demand instances. AWS EC2 Reserved Instances can be purchased with a one-time payment and used throughout the term you select.

Table Usage Guide

The aws_ec2_reserved_instance table in Steampipe provides you with information about Reserved Instances within Amazon Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query reserved instance-specific details, including instance type, offering class, and state. You can utilize this table to gather insights on reserved instances, such as their configurations, reserved instance state, and associated tags. The schema outlines the various attributes of the reserved instance for you, including its ARN, instance type, offering class, and associated tags.

Examples

Basic Info

Determine the areas in which you can gain insights into your Amazon EC2 reserved instances, such as understanding the instance type, state, and costs associated with the reservation. This is useful for managing your AWS resources and optimizing your cloud cost.

select
  reserved_instance_id,
  arn,
  instance_type,
  instance_state,
  currency_code,
  CAST(fixed_price AS varchar),
  offering_class,
  scope,
  CAST(usage_price AS varchar)
from
  aws_ec2_reserved_instance;
select
  reserved_instance_id,
  arn,
  instance_type,
  instance_state,
  currency_code,
  CAST(fixed_price AS text),
  offering_class,
  scope,
  CAST(usage_price AS text)
from
  aws_ec2_reserved_instance;

Count reserved instances by instance type

Determine the number of reserved instances per type to better manage your AWS EC2 resources and optimize your cloud infrastructure.

select
  instance_type,
  count(instance_count) as count
from
  aws_ec2_reserved_instance
group by
  instance_type;
select
  instance_type,
  count(instance_count) as count
from
  aws_ec2_reserved_instance
group by
  instance_type;

List reserved instances provisioned with undesired instance type(s) (for example, t2.large and m3.medium are desired)

Determine the areas in which the provisioned reserved instances are not of the desired types such as t2.large and m3.medium. This can help in optimizing resources and better cost management.

select
  instance_type,
  count(*) as count
from
  aws_ec2_reserved_instance
where
  instance_type not in ('t2.large', 'm3.medium')
group by
  instance_type;
select
  instance_type,
  count(*) as count
from
  aws_ec2_reserved_instance
where
  instance_type not in ('t2.large', 'm3.medium')
group by
  instance_type;

List standard offering class type reserved instances

Discover the segments that consist of standard offering class type within reserved instances in AWS EC2, which can assist in better management of resource allocation and cost optimization.

select
  reserved_instance_id,
  instance_type,
  offering_class
from
  aws_ec2_reserved_instance
where
  offering_class = 'standard';
select
  reserved_instance_id,
  instance_type,
  offering_class
from
  aws_ec2_reserved_instance
where
  offering_class = 'standard';

List active reserved instances

Determine the areas in which active reserved instances are being utilized within your AWS EC2 service. This can help in managing resources and optimizing costs by identifying instances that are currently in active use.

select
  reserved_instance_id,
  instance_type,
  instance_state
from
  aws_ec2_reserved_instance
where
  instance_state = 'active';
select
  reserved_instance_id,
  instance_type,
  instance_state
from
  aws_ec2_reserved_instance
where
  instance_state = 'active';
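Count reserved instances by state

A quick inventory sketch using only the columns shown above (standard SQL, valid in both dialects), summarizing how many reservations are active, retired, or otherwise:

```sql
select
  instance_state,
  count(*) as count
from
  aws_ec2_reserved_instance
group by
  instance_state;
```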
title description
Steampipe Table: aws_ec2_spot_price - Query AWS EC2 Spot Price using SQL
Allows users to query AWS EC2 Spot Price data, including information about the instance type, product description, spot price, and the date and time the price was set.

Table: aws_ec2_spot_price - Query AWS EC2 Spot Price using SQL

The AWS EC2 Spot Price is a feature of Amazon Elastic Compute Cloud (EC2) that allows you to bid on spare Amazon EC2 computing capacity. Spot Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and test & development workloads.

Table Usage Guide

The aws_ec2_spot_price table in Steampipe provides you with information about the spot price of EC2 instances within Amazon Web Services (AWS). This table allows you, as a DevOps engineer, to query spot price-specific details, including the instance type, product description, spot price, and the date and time the price was set. You can utilize this table to gather insights on EC2 spot prices, such as the historical price trends, comparison of prices across different instance types, and to make cost-effective decisions. The schema outlines the various attributes of the EC2 spot price for you, including the availability zone, instance type, product description, spot price, and timestamp.

Examples

List EC2 spot prices for Linux m5.4xlarge instance in eu-west-3a and eu-west-3b availability zones in the last month

Explore the fluctuations in spot prices for a specific Linux instance type in certain availability zones over the past month. This can help determine the most cost-effective times to run instances and optimize cloud expenditure.

select
  availability_zone,
  instance_type,
  product_description,
  spot_price::numeric as spot_price,
  create_timestamp as start_time,
  lead(create_timestamp, 1, now()) over (partition by instance_type, availability_zone, product_description order by create_timestamp) as stop_time
from
  aws_ec2_spot_price
where
  instance_type = 'm5.4xlarge'
  and product_description = 'Linux/UNIX'
  and availability_zone in
  (
    'eu-west-3a',
    'eu-west-3b'
  )
  and start_time >= now() - interval '1' month
  and end_time <= now() - interval '1' minute;
select
  availability_zone,
  instance_type,
  product_description,
  cast(spot_price as real) as spot_price,
  create_timestamp as start_time,
  (
    select min(create_timestamp) 
    from aws_ec2_spot_price as b 
    where 
      b.instance_type = a.instance_type 
      and b.availability_zone = a.availability_zone 
      and b.product_description = a.product_description 
      and b.create_timestamp > a.create_timestamp
  ) as stop_time
from
  aws_ec2_spot_price as a
where
  instance_type = 'm5.4xlarge'
  and product_description = 'Linux/UNIX'
  and availability_zone in
  (
    'eu-west-3a',
    'eu-west-3b'
  )
  and start_time >= datetime('now', '-1 month')
  and end_time <= datetime('now', '-1 minute');
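Compare average spot prices across instance types

As suggested in the usage guide, spot prices can be compared across instance types. A sketch for Linux/UNIX instances (PostgreSQL syntax; in SQLite, cast with `cast(spot_price as real)` instead):

```sql
select
  availability_zone,
  instance_type,
  avg(spot_price::numeric) as avg_spot_price
from
  aws_ec2_spot_price
where
  product_description = 'Linux/UNIX'
group by
  availability_zone,
  instance_type
order by
  avg_spot_price;
```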
title description
Steampipe Table: aws_ec2_ssl_policy - Query AWS EC2 SSL Policies using SQL
Allows users to query AWS EC2 SSL Policies to retrieve detailed information about SSL policies used in AWS EC2 Load Balancers.

Table: aws_ec2_ssl_policy - Query AWS EC2 SSL Policies using SQL

The AWS EC2 SSL Policies are predefined security policies that determine the SSL/TLS protocol that an AWS EC2 instance uses when it's communicating with clients. These policies help to establish the ciphers and protocols that services like Elastic Load Balancing use when negotiating SSL/TLS connections. They can be customized to meet specific security requirements, ensuring secure and reliable client-to-server communications.

Table Usage Guide

The aws_ec2_ssl_policy table in Steampipe provides you with information about SSL policies used in AWS Elastic Compute Cloud (EC2) Load Balancers. This table allows you as a developer or cloud architect to query SSL policy-specific details, including the policy name, the SSL protocols, and the cipher suite configurations. You can utilize this table to gather insights on the SSL policies, such as enabled SSL protocols, preferred cipher suites, and more. The schema outlines the various attributes of the SSL policy for you, including the policy name, the SSL protocols, the SSL ciphers, and the server order preference.

Examples

Basic info

Determine the areas in which your AWS EC2 instances are using certain SSL protocols. This can be beneficial for identifying potential security risks and ensuring that your instances are configured to use the most secure protocols.

select
  name,
  ssl_protocols
from
  aws_ec2_ssl_policy;
select
  name,
  ssl_protocols
from
  aws_ec2_ssl_policy;

List load balancer listeners that use an SSL policy with weak ciphers

Identify the load balancer listeners that are using an SSL policy with weak ciphers. This is beneficial for enhancing the security of your applications by pinpointing potential vulnerabilities.

select
  arn,
  ssl_policy
from
  aws_ec2_load_balancer_listener listener
join 
  aws_ec2_ssl_policy ssl_policy
on
  listener.ssl_policy = ssl_policy.Name
where
  ssl_policy.ciphers @> '[{"Name":"DES-CBC3-SHA"}]';
select
  arn,
  ssl_policy
from
  aws_ec2_load_balancer_listener listener
join 
  aws_ec2_ssl_policy ssl_policy
on
  listener.ssl_policy = ssl_policy.Name
where
  exists (
    select 1
    from json_each(ssl_policy.ciphers) as cipher
    where json_extract(cipher.value, '$.Name') = 'DES-CBC3-SHA'
  );
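List SSL policies that still allow TLSv1

A related audit sketch, assuming `ssl_protocols` is a JSONB array of protocol names as shown in the basic info query:

```sql
select
  name,
  ssl_protocols
from
  aws_ec2_ssl_policy
where
  ssl_protocols @> '["TLSv1"]';
```

In SQLite, iterate the array with `json_each(ssl_protocols)` and compare each value to 'TLSv1'.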
title description
Steampipe Table: aws_ec2_target_group - Query AWS EC2 Target Groups using SQL
Allows users to query AWS EC2 Target Groups and provides information about each Target Group within an AWS account.

Table: aws_ec2_target_group - Query AWS EC2 Target Groups using SQL

An AWS EC2 Target Group is a component of the Elastic Load Balancing service. It is used to route requests to one or more registered targets, such as EC2 instances, as part of a load balancing configuration. This allows the distribution of network traffic to multiple resources, improving availability and fault tolerance in your applications.

Table Usage Guide

The aws_ec2_target_group table in Steampipe provides you with information about each Target Group within your AWS account. This table allows you, as a DevOps engineer, security auditor, or other technical professional, to query Target Group-specific details, including the associated load balancer, health check configuration, and attributes. You can utilize this table to gather insights on Target Groups, such as their configurations, associated resources, and more. The schema outlines the various attributes of the Target Group for you, including the ARN, Health Check parameters, and associated tags.

Examples

Basic target group info

Explore the different target groups within your AWS EC2 instances to understand their associated load balancer resources and the virtual private cloud (VPC) they belong to. This can help in managing and optimizing your cloud resources effectively.

select
  target_group_name,
  target_type,
  load_balancer_arns,
  vpc_id
from
  aws_ec2_target_group;
select
  target_group_name,
  target_type,
  load_balancer_arns,
  vpc_id
from
  aws_ec2_target_group;

Health check info of target groups

This query is used to gain insights into the health check configurations of target groups within an AWS EC2 environment. Its practical application lies in its ability to help identify potential issues or vulnerabilities in the system, ensuring optimal performance and security.

select
  health_check_enabled,
  protocol,
  matcher_http_code,
  healthy_threshold_count,
  unhealthy_threshold_count,
  health_check_interval_seconds,
  health_check_path,
  health_check_port,
  health_check_protocol,
  health_check_timeout_seconds
from
  aws_ec2_target_group;
select
  health_check_enabled,
  protocol,
  matcher_http_code,
  healthy_threshold_count,
  unhealthy_threshold_count,
  health_check_interval_seconds,
  health_check_path,
  health_check_port,
  health_check_protocol,
  health_check_timeout_seconds
from
  aws_ec2_target_group;

Registered target for each target group

Determine the areas in which each registered target is located for a specific target group. This can be useful for identifying potential issues with load balancing or for optimizing resource allocation across different availability zones.

select
  target_group_name,
  target_type,
  target -> 'Target' ->> 'AvailabilityZone' as availability_zone,
  target -> 'Target' ->> 'Id' as id,
  target -> 'Target' ->> 'Port' as port
from
  aws_ec2_target_group
  cross join jsonb_array_elements(target_health_descriptions) as target;
select
  target_group_name,
  target_type,
  json_extract(target.value, '$.Target.AvailabilityZone') as availability_zone,
  json_extract(target.value, '$.Target.Id') as id,
  json_extract(target.value, '$.Target.Port') as port
from
  aws_ec2_target_group,
  json_each(target_health_descriptions) as target;

Health status of registered targets

Identify instances where the health status of registered targets in EC2 instances can be assessed. This allows for proactive management of resources by pinpointing potential issues or disruptions in the target groups.

select
  target_group_name,
  target_type,
  target -> 'TargetHealth' ->> 'Description' as description,
  target -> 'TargetHealth' ->> 'Reason' as reason,
  target -> 'TargetHealth' ->> 'State' as state
from
  aws_ec2_target_group
  cross join jsonb_array_elements(target_health_descriptions) as target;
select
  target_group_name,
  target_type,
  json_extract(target.value, '$.TargetHealth.Description') as description,
  json_extract(target.value, '$.TargetHealth.Reason') as reason,
  json_extract(target.value, '$.TargetHealth.State') as state
from
  aws_ec2_target_group,
  json_each(target_health_descriptions) as target;
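List targets that are not healthy

Building on the health status query above, a sketch that filters to targets whose state is anything other than healthy:

```sql
select
  target_group_name,
  target -> 'Target' ->> 'Id' as target_id,
  target -> 'TargetHealth' ->> 'State' as state
from
  aws_ec2_target_group
  cross join jsonb_array_elements(target_health_descriptions) as target
where
  target -> 'TargetHealth' ->> 'State' <> 'healthy';
```

The SQLite form follows the same pattern with `json_each` and `json_extract`.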
title description
Steampipe Table: aws_ec2_transit_gateway - Query AWS EC2 Transit Gateway using SQL
Allows users to query AWS EC2 Transit Gateway resources for detailed information on configuration, status, and associations.

Table: aws_ec2_transit_gateway - Query AWS EC2 Transit Gateway using SQL

The AWS EC2 Transit Gateway is a service that simplifies the process of networking connectivity across multiple Amazon Virtual Private Clouds (VPCs) and on-premises networks. It acts as a hub that controls how traffic is routed among all connected networks which simplifies your network architecture. With Transit Gateway, you can manage connectivity for thousands of VPCs, easily scale connectivity across multiple AWS accounts, and segregate your network traffic to improve security.

Table Usage Guide

The aws_ec2_transit_gateway table in Steampipe provides you with information about Transit Gateways within Amazon Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query Transit Gateway-specific details, including its configuration, state, and associations. You can utilize this table to gather insights on Transit Gateways, such as its attached VPCs, VPN connections, Direct Connect gateways, and more. The schema outlines the various attributes of the Transit Gateway for you, including the transit gateway ID, creation time, state, and associated tags.

Examples

Basic Transit Gateway info

Gain insights into the status and ownership details of your AWS Transit Gateway configurations, along with their creation times, to better manage your network transit connectivity. This can be particularly useful for auditing, tracking changes, and troubleshooting network issues.

select
  transit_gateway_id,
  state,
  owner_id,
  creation_time
from
  aws_ec2_transit_gateway;
select
  transit_gateway_id,
  state,
  owner_id,
  creation_time
from
  aws_ec2_transit_gateway;

List transit gateways that automatically accept shared account attachments

Determine the areas in which transit gateways are set to automatically accept shared account attachments. This is useful to identify potential security risks and ensure proper management of your AWS resources.

select
  transit_gateway_id,
  auto_accept_shared_attachments
from
  aws_ec2_transit_gateway
where
  auto_accept_shared_attachments = 'enable';
select
  transit_gateway_id,
  auto_accept_shared_attachments
from
  aws_ec2_transit_gateway
where
  auto_accept_shared_attachments = 'enable';

Find the number of transit gateways by default route table id

Determine the areas in which transit gateways are most commonly associated by default route table ID, which can aid in understanding network traffic distribution and optimizing resource allocation within your AWS EC2 environment.

select
  association_default_route_table_id,
  count(transit_gateway_id) as transit_gateway
from
  aws_ec2_transit_gateway
group by
  association_default_route_table_id;
select
  association_default_route_table_id,
  count(transit_gateway_id) as transit_gateway
from
  aws_ec2_transit_gateway
group by
  association_default_route_table_id;

List transit gateways missing an application tag

Discover the segments that have transit gateways without an application tag, enabling you to identify and categorize untagged resources for better resource management and organization.

select
  transit_gateway_id,
  tags
from
  aws_ec2_transit_gateway
where
  not tags :: JSONB ? 'application';
select
  transit_gateway_id,
  tags
from
  aws_ec2_transit_gateway
where
  json_extract(tags, '$.application') IS NULL;
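Count transit gateways by state

A simple inventory sketch (standard SQL, valid in both dialects) summarizing how many transit gateways are in each lifecycle state:

```sql
select
  state,
  count(transit_gateway_id) as count
from
  aws_ec2_transit_gateway
group by
  state;
```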
title description
Steampipe Table: aws_ec2_transit_gateway_route - Query AWS EC2 Transit Gateway Routes using SQL
Allows users to query AWS EC2 Transit Gateway Routes for detailed information about each route, including the destination CIDR block, the route's current state, and the transit gateway attachments.

Table: aws_ec2_transit_gateway_route - Query AWS EC2 Transit Gateway Routes using SQL

The AWS EC2 Transit Gateway Routes enable you to manage connectivity between multiple Virtual Private Clouds (VPCs) and on-premises networks by acting as a hub. They simplify network architecture by reducing the number of connections required to connect multiple VPCs and on-premises networks. Transit Gateway Routes also provide flexible routing policies to support various types of network architectures.

Table Usage Guide

The aws_ec2_transit_gateway_route table in Steampipe provides you with information about the routes in each transit gateway within AWS EC2. This table allows you, as a DevOps engineer, to query route-specific details, including the destination CIDR block, the route's current state, and the transit gateway attachments. You can utilize this table to gather insights on routes, such as verifying the transit gateway route's state, checking the destination CIDR block, and more. The schema outlines the various attributes of the transit gateway route for you, including the transit gateway route ID, transit gateway route destination CIDR block, and associated tags.

Examples

Basic info

Explore the configuration of your AWS EC2 transit gateway routes to understand their current state and type. This can help you identify potential network routing issues or areas for optimization.

select
  transit_gateway_route_table_id,
  destination_cidr_block,
  prefix_list_id,
  state,
  type
from
  aws_ec2_transit_gateway_route;
select
  transit_gateway_route_table_id,
  destination_cidr_block,
  prefix_list_id,
  state,
  type
from
  aws_ec2_transit_gateway_route;

List active routes

Explore which transit gateway routes are currently active to manage network traffic effectively. This is useful for maintaining network efficiency and ensuring optimal route configurations.

select
  transit_gateway_route_table_id,
  destination_cidr_block,
  state,
  type
from
  aws_ec2_transit_gateway_route
where
  state = 'active';
select
  transit_gateway_route_table_id,
  destination_cidr_block,
  state,
  type
from
  aws_ec2_transit_gateway_route
where
  state = 'active';
title description
Steampipe Table: aws_ec2_transit_gateway_route_table - Query AWS EC2 Transit Gateway Route Tables using SQL
Allows users to query AWS EC2 Transit Gateway Route Tables and retrieve detailed information about each route table, including its ID, state, transit gateway ID, and other associated metadata.

Table: aws_ec2_transit_gateway_route_table - Query AWS EC2 Transit Gateway Route Tables using SQL

The AWS EC2 Transit Gateway Route Table is a component of Amazon's Elastic Compute Cloud (EC2) service that allows you to manage routing for your Transit Gateways. It facilitates the control of traffic between different networks within your cloud environment. Using this resource, you can define rules that determine the path network traffic takes to reach a specific destination.

Table Usage Guide

The aws_ec2_transit_gateway_route_table table in Steampipe provides you with information about each route table associated with a transit gateway within your Amazon Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query route table-specific details, including the transit gateway ID, route table ID, state, and associated tags. You can utilize this table to gather insights on transit gateway route tables, such as their current state, associated transit gateways, and more. The schema outlines the various attributes of the transit gateway route table for you, including the route table ID, transit gateway ID, creation time, and associated tags.

Examples

Basic transit gateway route table info

Explore the fundamental characteristics of your transit gateway route tables in AWS EC2. This query is useful in understanding the default associations and propagations within your route tables, aiding in efficient network management.

select
  transit_gateway_route_table_id,
  transit_gateway_id,
  default_association_route_table,
  default_propagation_route_table
from
  aws_ec2_transit_gateway_route_table;
select
  transit_gateway_route_table_id,
  transit_gateway_id,
  default_association_route_table,
  default_propagation_route_table
from
  aws_ec2_transit_gateway_route_table;

Count of transit gateway route table by transit gateway

Explore which transit gateways are associated with numerous route tables in your AWS EC2 service. This can be useful for optimizing network routing paths and managing network resources effectively.

select
  transit_gateway_id,
  count(transit_gateway_route_table_id) as transit_gateway_route_table_count
from
  aws_ec2_transit_gateway_route_table
group by
  transit_gateway_id;
select
  transit_gateway_id,
  count(transit_gateway_route_table_id) as transit_gateway_route_table_count
from
  aws_ec2_transit_gateway_route_table
group by
  transit_gateway_id;
title description
Steampipe Table: aws_ec2_transit_gateway_vpc_attachment - Query AWS EC2 Transit Gateway VPC Attachments using SQL
Allows users to query AWS EC2 Transit Gateway VPC Attachments for details such as the attachment state, creation time, and more.

Table: aws_ec2_transit_gateway_vpc_attachment - Query AWS EC2 Transit Gateway VPC Attachments using SQL

The AWS EC2 Transit Gateway VPC Attachment is a resource that allows you to attach an Amazon VPC to a transit gateway. This attachment enables connectivity between the VPC and other networks connected to the transit gateway. It simplifies network architecture, reduces operational overhead, and provides a central gateway for connectivity.

Table Usage Guide

The aws_ec2_transit_gateway_vpc_attachment table in Steampipe provides you with information about the attachments between Virtual Private Clouds (VPCs) and transit gateways in Amazon Elastic Compute Cloud (EC2). This table allows you, as a DevOps engineer, to query attachment-specific details, including the attachment state, creation time, and associated metadata. You can utilize this table to gather insights on attachments, such as their status, the VPCs they are associated with, the transit gateways they are connected to, and more. The schema outlines the various attributes of the transit gateway VPC attachment for you, including the attachment ID, transit gateway ID, VPC ID, and associated tags.

Examples

Basic transit gateway VPC attachment info

Determine the areas in which your AWS EC2 Transit Gateway is attached to a VPC. This helps you understand the status and ownership of these connections, as well as when they were created.

select
  transit_gateway_attachment_id,
  transit_gateway_id,
  state,
  transit_gateway_owner_id,
  creation_time,
  association_state
from
  aws_ec2_transit_gateway_vpc_attachment;
select
  transit_gateway_attachment_id,
  transit_gateway_id,
  state,
  transit_gateway_owner_id,
  creation_time,
  association_state
from
  aws_ec2_transit_gateway_vpc_attachment;

Count of transit gateway VPC attachments by resource type

Analyze your AWS EC2 Transit Gateway setup to understand the distribution of VPC attachments across different types of resources. This could be useful in optimizing resource allocation and identifying potential areas for cost savings.

select
  resource_type,
  count(transit_gateway_attachment_id) as count
from
  aws_ec2_transit_gateway_vpc_attachment
group by
  resource_type;
select
  resource_type,
  count(transit_gateway_attachment_id) as count
from
  aws_ec2_transit_gateway_vpc_attachment
group by
  resource_type;
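List attachments that are not available

A sketch (standard SQL) to surface VPC attachments in any state other than available, which may indicate pending or failed attachments:

```sql
select
  transit_gateway_attachment_id,
  transit_gateway_id,
  state
from
  aws_ec2_transit_gateway_vpc_attachment
where
  state <> 'available';
```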
title description
Steampipe Table: aws_ecr_image - Query Amazon ECR Images using SQL
Allows users to query Amazon Elastic Container Registry (ECR) Images and retrieve detailed information about each image, including image tags, push timestamps, image sizes, and more.

Table: aws_ecr_image - Query Amazon ECR Images using SQL

The Amazon Elastic Container Registry (ECR) Images are Docker images that are stored within AWS's managed and highly available registry. ECR Images allow you to easily store, manage, and deploy Docker container images in a secure environment. They are integrated with AWS Identity and Access Management (IAM) for resource-level control and support for private Docker repositories.

Table Usage Guide

The aws_ecr_image table in Steampipe provides you with information about Images within Amazon Elastic Container Registry (ECR). This table allows you, as a DevOps engineer, to query image-specific details, including image tags, push timestamps, image sizes, and associated metadata. You can utilize this table to gather insights on images, such as image scan findings, image vulnerability details, verification of image tags, and more. The schema outlines the various attributes of the ECR image for you, including the image digest, image tags, image scan status, and associated tags.

Examples

Basic info

Explore the details of your AWS Elastic Container Registry (ECR) images, like when they were last updated and their size, to better manage your resources. This can help in identifying outdated or oversized images, thus optimizing your ECR utilization.

select
  repository_name,
  image_digest,
  image_pushed_at,
  image_size_in_bytes,
  registry_id,
  image_scan_status,
  image_tags
from
  aws_ecr_image;
select
  repository_name,
  image_digest,
  image_pushed_at,
  image_size_in_bytes,
  registry_id,
  image_scan_status,
  image_tags
from
  aws_ecr_image;

List image scan findings

Identify instances where your repository images might have vulnerabilities by examining the severity of scan findings. This allows you to assess the security of your images and take necessary actions based on the severity of the findings.

select
  repository_name,
  image_scan_findings_summary ->> 'FindingSeverityCounts' as finding_severity_counts,
  image_scan_findings_summary ->> 'ImageScanCompletedAt' as image_scan_completed_at,
  image_scan_findings_summary ->> 'VulnerabilitySourceUpdatedAt' as vulnerability_source_updated_at
from
  aws_ecr_image;
select
  repository_name,
  json_extract(image_scan_findings_summary, '$.FindingSeverityCounts') as finding_severity_counts,
  json_extract(image_scan_findings_summary, '$.ImageScanCompletedAt') as image_scan_completed_at,
  json_extract(image_scan_findings_summary, '$.VulnerabilitySourceUpdatedAt') as vulnerability_source_updated_at
from
  aws_ecr_image;

List image tags for the images

Explore which image tags are associated with the images in your AWS ECR repositories. This can help you manage and organize your resources more effectively.

select
  repository_name,
  registry_id,
  image_digest,
  image_tags
from
  aws_ecr_image;
select
  repository_name,
  registry_id,
  image_digest,
  image_tags
from
  aws_ecr_image;

List images pushed in last 10 days for a repository

Determine the images that have been uploaded to a specific repository in the last 10 days. This is useful for tracking recent updates or additions to the repository.

select
  repository_name,
  image_digest,
  image_pushed_at,
  image_size_in_bytes
from
  aws_ecr_image
where
  image_pushed_at >= now() - interval '10' day
and
  repository_name = 'test1';
select
  repository_name,
  image_digest,
  image_pushed_at,
  image_size_in_bytes
from
  aws_ecr_image
where
  image_pushed_at >= datetime('now','-10 day')
and
  repository_name = 'test1';

List images for repositories created in the last 20 days

Explore recently created repositories and the images they contain. This query is useful for keeping track of new content and managing resources within a 20-day timeframe.

select
  i.repository_name as repository_name,
  r.repository_uri as repository_uri,
  i.image_digest as image_digest,
  i.image_tags as image_tags
from
  aws_ecr_image as i,
  aws_ecr_repository as r
where
  i.repository_name = r.repository_name
and
  r.created_at >= now() - interval '20' day;
select
  i.repository_name as repository_name,
  r.repository_uri as repository_uri,
  i.image_digest as image_digest,
  i.image_tags as image_tags
from
  aws_ecr_image as i,
  aws_ecr_repository as r
where
  i.repository_name = r.repository_name
and
  r.created_at >= datetime('now', '-20 days');

Get repository policy for each image's repository

Determine the access policies associated with each image's repository in AWS Elastic Container Registry (ECR). This can help to identify potential security risks, such as open access to sensitive images.

select
  i.repository_name as repository_name,
  r.repository_uri as repository_uri,
  i.image_digest as image_digest,
  i.image_tags as image_tags,
  s ->> 'Effect' as effect,
  s ->> 'Action' as action,
  s ->> 'Condition' as condition,
  s ->> 'Principal' as principal
from
  aws_ecr_image as i,
  aws_ecr_repository as r,
  jsonb_array_elements(r.policy -> 'Statement') as s
where
  i.repository_name = r.repository_name;
select
  i.repository_name as repository_name,
  r.repository_uri as repository_uri,
  i.image_digest as image_digest,
  i.image_tags as image_tags,
  json_extract(s.value, '$.Effect') as effect,
  json_extract(s.value, '$.Action') as action,
  json_extract(s.value, '$.Condition') as condition,
  json_extract(s.value, '$.Principal') as principal
from
  aws_ecr_image as i,
  aws_ecr_repository as r,
  json_each(r.policy, '$.Statement') as s
where
  i.repository_name = r.repository_name;

Scan images with Trivy for a particular repository

This example analyzes the security vulnerabilities of images in a specific repository. It helps in proactively identifying and addressing potential security issues, thereby enhancing the overall safety of your applications.

select
  artifact_name,
  artifact_type,
  metadata,
  results
from
  trivy_scan_artifact as a,
  aws_ecr_image as i
where
  artifact_name = image_uri
  and repository_name = 'hello';
select
  artifact_name,
  artifact_type,
  metadata,
  results
from
  trivy_scan_artifact as a,
  aws_ecr_image as i
where
  artifact_name = image_uri
  and repository_name = 'hello';
title description
Steampipe Table: aws_ecr_image_scan_finding - Query Amazon Elastic Container Registry (ECR) Image Scan Findings using SQL
Allows users to query Amazon ECR Image Scan Findings to retrieve detailed information about image scan findings, including attributes such as the severity of the finding, description, and package name where the vulnerability was found.

Table: aws_ecr_image_scan_finding - Query Amazon Elastic Container Registry (ECR) Image Scan Findings using SQL

The Amazon Elastic Container Registry (ECR) Image Scan Findings is a feature of AWS ECR that allows you to identify any software vulnerabilities in your Docker images. It uses the Common Vulnerabilities and Exposures (CVEs) database from the open-source Clair project. It provides detailed findings, severity levels, and a description of the vulnerabilities.

Table Usage Guide

The aws_ecr_image_scan_finding table in Steampipe provides you with information about Image Scan Findings within Amazon Elastic Container Registry (ECR). This table allows you, as a DevOps engineer, to query specific details about image scan findings, including attributes such as the severity of the finding, description, and package name where the vulnerability was found. You can utilize this table to gather insights on image scan findings, such as identifying high-risk vulnerabilities, verifying package vulnerabilities, and more. The schema outlines the various attributes of the Image Scan Finding for you, including the repository name, image digest, finding severity, and associated metadata.

Important Notes

  • Users or roles that have the AWS managed ReadOnlyAccess policy attached also need the AWS managed AmazonInspector2ReadOnlyAccess policy attached to query this table.

Examples

List scan findings for an image

Identify potential vulnerabilities in a specific image within a repository. This assists in enhancing the security by highlighting areas of concern and providing insights into the severity and nature of the detected issues.

select
  repository_name,
  image_tag,
  name,
  severity,
  description,
  attributes,
  uri,
  image_scan_status,
  image_scan_completed_at,
  vulnerability_source_updated_at
from
  aws_ecr_image_scan_finding
where
  repository_name = 'my-repo'
  and image_tag = 'my-image-tag';
select
  repository_name,
  image_tag,
  name,
  severity,
  description,
  attributes,
  uri,
  image_scan_status,
  image_scan_completed_at,
  vulnerability_source_updated_at
from
  aws_ecr_image_scan_finding
where
  repository_name = 'my-repo'
  and image_tag = 'my-image-tag';

Get CVEs for all images pushed in the last 24 hours

Explore potential vulnerabilities in your system by identifying Common Vulnerabilities and Exposures (CVEs) in all images that have been pushed in the last 24 hours. This is particularly useful for maintaining system security and identifying areas that may need immediate attention or updates.

select
  f.repository_name,
  f.image_tag,
  f.name,
  f.severity,
  jsonb_pretty(f.attributes) as attributes
from
  (
    select
      repository_name,
      jsonb_array_elements_text(image_tags) as image_tag
    from
      aws_ecr_image as i
    where
      i.image_pushed_at > now() - interval '24' hour
  )
  images
  left outer join
    aws_ecr_image_scan_finding as f
    on images.repository_name = f.repository_name
    and images.image_tag = f.image_tag;
select
  f.repository_name,
  f.image_tag,
  f.name,
  f.severity,
  f.attributes as attributes
from
  (
    select
      repository_name,
      json_each.value as image_tag
    from
      aws_ecr_image as i,
      json_each(i.image_tags)
    where
      i.image_pushed_at > datetime('now', '-24 hours')
  )
  images
  left outer join
    aws_ecr_image_scan_finding as f
    on images.repository_name = f.repository_name
    and images.image_tag = f.image_tag;
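
Count findings by severity for an image

As a quick extension of the examples above, you can aggregate findings by severity to gauge the overall risk profile of an image. This is a sketch that reuses the repository_name and image_tag qualifiers shown earlier; the query is standard SQL and works unchanged in both PostgreSQL and SQLite.

select
  severity,
  count(*) as finding_count
from
  aws_ecr_image_scan_finding
where
  repository_name = 'my-repo'
  and image_tag = 'my-image-tag'
group by
  severity
order by
  finding_count desc;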
title description
Steampipe Table: aws_ecr_registry_scanning_configuration - Query AWS ECR Registry Scanning Configuration using SQL
Allows users to query AWS ECR Registry Scanning Configuration at the private registry level on a per-region basis.

Table: aws_ecr_registry_scanning_configuration - Query AWS ECR Registry Scanning Configuration using SQL

The AWS ECR Registry Scanning Configurations are defined at the private registry level on a per-region basis. These refer to the settings and policies that govern how Amazon ECR scans your container images for vulnerabilities. Amazon ECR integrates with the Amazon ECR image scanning feature, which automatically scans your Docker and OCI images for software vulnerabilities.

Table Usage Guide

The aws_ecr_registry_scanning_configuration table in Steampipe provides you with information about the scanning configurations of Amazon Elastic Container Registry (ECR). This table allows you, as a cloud administrator, security team member, or developer, to query the scanning rules associated with the registry. You can utilize this table to gather insights on scanning configurations, such as the rules, the repository filters, and the region name. The schema outlines the various attributes of the scanning configurations for you, including the region, rules, repository filters, scan type and scan frequency.

Examples

Basic configuration info

Analyze the configuration to understand how Amazon ECR scans your container images for vulnerabilities. This is essential for security, compliance, and operational efficiency in managing container images.

select
  registry_id,
  jsonb_pretty(scanning_configuration),
  region
from
  aws_ecr_registry_scanning_configuration;
select
  registry_id,
  scanning_configuration,
  region
from
  aws_ecr_registry_scanning_configuration;

Configuration info for a particular region

Determine the scanning configuration of container images for a specific region. This query is beneficial for understanding the scanning configuration of your container images in that particular region.

select
  registry_id,
  jsonb_pretty(scanning_configuration),
  region
from
  aws_ecr_registry_scanning_configuration
where
  region = 'ap-south-1';
select
  registry_id,
  scanning_configuration,
  region
from
  aws_ecr_registry_scanning_configuration
where
  region = 'ap-south-1';

List the regions where enhanced scanning is enabled

Identify regions where enhanced scanning is enabled for container images. This helps determine where enhanced vulnerability scanning is available through the integration with Amazon Inspector.

select
  registry_id,
  region
from
  aws_ecr_registry_scanning_configuration
where
  scanning_configuration ->> 'ScanType' = 'ENHANCED';
select
  registry_id,
  region
from
  aws_ecr_registry_scanning_configuration
where
  json_extract(scanning_configuration, '$.ScanType') = 'ENHANCED';
title description
Steampipe Table: aws_ecr_repository - Query AWS ECR Repositories using SQL
Allows users to query AWS Elastic Container Registry (ECR) Repositories and retrieve detailed information about each repository.

Table: aws_ecr_repository - Query AWS ECR Repositories using SQL

The AWS ECR Repository is a managed docker container registry service provided by Amazon Web Services. It makes it easy for developers to store, manage, and deploy Docker container images. Amazon ECR eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure.

Table Usage Guide

The aws_ecr_repository table in Steampipe provides you with information about repositories within AWS Elastic Container Registry (ECR). This table allows you, as a DevOps engineer, to query repository-specific details, including repository ARN, repository URI, and creation date. You can utilize this table to gather insights on repositories, such as repository policies, image scanning configurations, image tag mutability, and more. The schema outlines the various attributes of the ECR repository for you, including the repository name, creation date, and associated tags.

Examples

Basic info

Explore which Elastic Container Registry (ECR) repositories are available in your AWS account and determine their associated details such as creation date and region. This can be beneficial in managing your repositories and understanding their distribution across different regions.

select
  repository_name,
  registry_id,
  arn,
  repository_uri,
  created_at,
  region,
  account_id
from
  aws_ecr_repository;
select
  repository_name,
  registry_id,
  arn,
  repository_uri,
  created_at,
  region,
  account_id
from
  aws_ecr_repository;

List repositories which are not using Customer Managed Keys (CMK) for encryption

Determine the areas in which repositories are not utilizing Customer Managed Keys for encryption. This is useful for enhancing security measures by identifying potential vulnerabilities in your encryption methods.

select
  repository_name,
  encryption_configuration ->> 'EncryptionType' as encryption_type,
  encryption_configuration ->> 'KmsKey' as kms_key
from
  aws_ecr_repository
where
  encryption_configuration ->> 'EncryptionType' = 'AES256';
select
  repository_name,
  json_extract(encryption_configuration, '$.EncryptionType') as encryption_type,
  json_extract(encryption_configuration, '$.KmsKey') as kms_key
from
  aws_ecr_repository
where
  json_extract(encryption_configuration, '$.EncryptionType') = 'AES256';

List repositories with automatic image scanning disabled

Identify instances where automatic image scanning is disabled in repositories. This is useful to ensure security measures are consistently applied across all repositories.

select
  repository_name,
  image_scanning_configuration ->> 'ScanOnPush' as scan_on_push
from
  aws_ecr_repository
where
  image_scanning_configuration ->> 'ScanOnPush' = 'false';
select
  repository_name,
  json_extract(image_scanning_configuration, '$.ScanOnPush') as scan_on_push
from
  aws_ecr_repository
where
  json_extract(image_scanning_configuration, '$.ScanOnPush') = 'false';

List images for each repository

Determine the images associated with each repository to understand their size, push time, last pull time, and scan status. This can help in managing repository resources, tracking image usage, and ensuring security compliance.

select
  r.repository_name as repository_name,
  i.image_digest as image_digest,
  i.image_tags as image_tags,
  i.image_pushed_at as image_pushed_at,
  i.image_size_in_bytes as image_size_in_bytes,
  i.last_recorded_pull_time as last_recorded_pull_time,
  i.registry_id as registry_id,
  i.image_scan_status as image_scan_status
from
  aws_ecr_repository as r,
  aws_ecr_image as i
where
  r.repository_name = i.repository_name;
select
  r.repository_name as repository_name,
  i.image_digest as image_digest,
  i.image_tags as image_tags,
  i.image_pushed_at as image_pushed_at,
  i.image_size_in_bytes as image_size_in_bytes,
  i.last_recorded_pull_time as last_recorded_pull_time,
  i.registry_id as registry_id,
  i.image_scan_status as image_scan_status
from
  aws_ecr_repository as r
join
  aws_ecr_image as i
on
  r.repository_name = i.repository_name;

List images with failed scans

Identify instances where image scans have failed in your AWS ECR repositories. This can help in diagnosing and rectifying issues related to image scanning, thereby improving the security and reliability of your container images.

select
  r.repository_name as repository_name,
  i.image_digest as image_digest,
  i.image_scan_status as image_scan_status
from
  aws_ecr_repository as r,
  aws_ecr_image as i
where
  r.repository_name = i.repository_name
  and i.image_scan_status ->> 'Status' = 'FAILED';
select
  r.repository_name as repository_name,
  i.image_digest as image_digest,
  json_extract(i.image_scan_status, '$.Status') as image_scan_status
from
  aws_ecr_repository as r
join
  aws_ecr_image as i
on
  r.repository_name = i.repository_name
where
  json_extract(i.image_scan_status, '$.Status') = 'FAILED';

List repositories whose tag immutability is disabled

Determine the areas in which image tag immutability is disabled within your repositories. This allows you to identify and manage potential vulnerabilities in your AWS Elastic Container Registry.

select
  repository_name,
  image_tag_mutability
from
  aws_ecr_repository
where
  image_tag_mutability = 'MUTABLE';
select
  repository_name,
  image_tag_mutability
from
  aws_ecr_repository
where
  image_tag_mutability = 'MUTABLE';
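
Count repositories by tag mutability setting

To get a quick overview rather than a per-repository list, you can group on the same column. This is a sketch using only the image_tag_mutability column shown above; the query works unchanged in both PostgreSQL and SQLite.

select
  image_tag_mutability,
  count(*) as repository_count
from
  aws_ecr_repository
group by
  image_tag_mutability;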

List repositories whose lifecycle policy rule is not configured to remove untagged and old images

Determine the areas in which repositories are not configured to automatically clean up untagged and old images. This can help in managing storage and avoiding unnecessary costs associated with unused or outdated images.

select
  repository_name,
  r -> 'selection' ->> 'tagStatus' as tag_status,
  r -> 'selection' ->> 'countType' as count_type
from
  aws_ecr_repository,
  jsonb_array_elements(lifecycle_policy -> 'rules') as r
where
  (
    (r -> 'selection' ->> 'tagStatus' <> 'untagged')
    and (
      r -> 'selection' ->> 'countType' <> 'sinceImagePushed'
    )
  );
select
  repository_name,
  json_extract(r.value, '$.selection.tagStatus') as tag_status,
  json_extract(r.value, '$.selection.countType') as count_type
from
  aws_ecr_repository,
  json_each(lifecycle_policy, '$.rules') as r
where
  (
    (json_extract(r.value, '$.selection.tagStatus') <> 'untagged')
    and (
      json_extract(r.value, '$.selection.countType') <> 'sinceImagePushed'
    )
  );

List repository policy statements that grant full access for each repository

Identify instances where full access has been granted to each repository. This is useful to review and manage access permissions, ensuring optimal security and control over your data repositories.

select
  title,
  p as principal,
  a as action,
  s ->> 'Effect' as effect,
  s -> 'Condition' as conditions
from
  aws_ecr_repository,
  jsonb_array_elements(policy -> 'Statement') as s,
  jsonb_array_elements_text(s -> 'Principal' -> 'AWS') as p,
  jsonb_array_elements_text(s -> 'Action') as a
where
  s ->> 'Effect' = 'Allow'
  and a in ('*', 'ecr:*');
select
  title,
  json_extract(p.value, '$') as principal,
  json_extract(a.value, '$') as action,
  json_extract(s.value, '$.Effect') as effect,
  json_extract(s.value, '$.Condition') as conditions
from
  aws_ecr_repository,
  json_each(policy, '$.Statement') as s,
  json_each(json_extract(s.value, '$.Principal.AWS')) as p,
  json_each(json_extract(s.value, '$.Action')) as a
where
  json_extract(s.value, '$.Effect') = 'Allow'
  and (
    json_extract(a.value, '$') = '*'
    or json_extract(a.value, '$') = 'ecr:*'
  );

List repository scanning configuration settings

Determine the frequency and triggers for scanning within your repositories to optimize security checks and resource management. This enables you to understand the efficiency and effectiveness of your scanning configurations.

select
  repository_name,
  r ->> 'AppliedScanFilters' as applied_scan_filters,
  r ->> 'RepositoryArn' as repository_arn,
  r ->> 'ScanFrequency' as scan_frequency,
  r ->> 'ScanOnPush' as scan_on_push
from
  aws_ecr_repository,
  jsonb_array_elements(repository_scanning_configuration -> 'ScanningConfigurations') as r;
select
  repository_name,
  json_extract(r.value, '$.AppliedScanFilters') as applied_scan_filters,
  json_extract(r.value, '$.RepositoryArn') as repository_arn,
  json_extract(r.value, '$.ScanFrequency') as scan_frequency,
  json_extract(r.value, '$.ScanOnPush') as scan_on_push
from
  aws_ecr_repository,
  json_each(repository_scanning_configuration, '$.ScanningConfigurations') as r;

List repositories where the scanning frequency is set to manual

Determine the areas in your AWS ECR repositories where the scanning frequency is manually set. This allows you to identify instances where automated scanning is not enabled, potentially leaving your repositories vulnerable to undetected issues.

select
  repository_name,
  r ->> 'RepositoryArn' as repository_arn,
  r ->> 'ScanFrequency' as scan_frequency
from
  aws_ecr_repository,
  jsonb_array_elements(repository_scanning_configuration -> 'ScanningConfigurations') as r
where
  r ->> 'ScanFrequency' = 'MANUAL';
select
  repository_name,
  json_extract(r.value, '$.RepositoryArn') as repository_arn,
  json_extract(r.value, '$.ScanFrequency') as scan_frequency
from
  aws_ecr_repository,
  json_each(repository_scanning_configuration, '$.ScanningConfigurations') as r
where
  json_extract(r.value, '$.ScanFrequency') = 'MANUAL';

List repositories with scan-on-push disabled

Identify instances where the scan-on-push feature is disabled in your repositories. This can help improve your security measures by ensuring all repositories are scanned for vulnerabilities upon each push.

select
  repository_name,
  r ->> 'RepositoryArn' as repository_arn,
  r ->> 'ScanOnPush' as scan_on_push
from
  aws_ecr_repository,
  jsonb_array_elements(repository_scanning_configuration -> 'ScanningConfigurations') as r
where
  r ->> 'ScanOnPush' = 'false';
select
  repository_name,
  json_extract(r.value, '$.RepositoryArn') as repository_arn,
  json_extract(r.value, '$.ScanOnPush') as scan_on_push
from
  aws_ecr_repository,
  json_each(repository_scanning_configuration, '$.ScanningConfigurations') as r
where
  json_extract(r.value, '$.ScanOnPush') = 'false';
title description
Steampipe Table: aws_ecrpublic_repository - Query AWS Elastic Container Registry Public Repository using SQL
Allows users to query AWS Elastic Container Registry Public Repository to get detailed information about each ECR public repository within an AWS account.

Table: aws_ecrpublic_repository - Query AWS Elastic Container Registry Public Repository using SQL

The AWS Elastic Container Registry Public Repository is a service that allows you to store, manage, and deploy Docker images. It eliminates the need to operate your own container repositories or worry about scaling the underlying infrastructure. It is a fully-managed service that makes it easy to store, manage, share, and deploy your container images and artifacts anywhere.

Table Usage Guide

The aws_ecrpublic_repository table in Steampipe provides you with information about each ECR public repository within your AWS account. This table allows you, as a DevOps engineer, to query repository-specific details, including the repository ARN, repository name, creation date, and associated metadata. You can use this table to gather insights on repositories, such as the number of images per repository, the status of each repository, and more. The schema outlines the various attributes of the ECR public repository for you, including the repository ARN, creation date, image tag mutability, and associated tags.

Examples

Basic info

Explore which public repositories are available in your AWS Elastic Container Registry. This can help you manage and track your container images, understand their origins and creation times, and identify the specific regions and accounts associated with each repository.

select
  repository_name,
  registry_id,
  arn,
  repository_uri,
  created_at,
  region,
  account_id
from
  aws_ecrpublic_repository;
select
  repository_name,
  registry_id,
  arn,
  repository_uri,
  created_at,
  region,
  account_id
from
  aws_ecrpublic_repository;

List repository policy statements that grant full access for each repository

Determine the areas in which repository policy statements are granting full access. This is useful for security audits and ensuring that access permissions are correctly configured.

select
  title,
  p as principal,
  a as action,
  s ->> 'Effect' as effect,
  s -> 'Condition' as conditions
from
  aws_ecrpublic_repository,
  jsonb_array_elements(policy -> 'Statement') as s,
  jsonb_array_elements_text(s -> 'Principal' -> 'AWS') as p,
  jsonb_array_elements_text(s -> 'Action') as a
where
  s ->> 'Effect' = 'Allow'
  and a in ('*', 'ecr-public:*');
select
  title,
  json_extract(p.value, '$') as principal,
  json_extract(a.value, '$') as action,
  json_extract(s.value, '$.Effect') as effect,
  json_extract(s.value, '$.Condition') as conditions
from
  aws_ecrpublic_repository,
  json_each(json_extract(policy, '$.Statement')) as s,
  json_each(json_extract(s.value, '$.Principal.AWS')) as p,
  json_each(json_extract(s.value, '$.Action')) as a
where
  json_extract(s.value, '$.Effect') = 'Allow'
  and json_extract(a.value, '$') in ('*', 'ecr-public:*');
title description
Steampipe Table: aws_ecs_cluster - Query AWS ECS Clusters using SQL
Allows users to query AWS ECS Clusters to retrieve detailed information about each cluster's configuration, status, and associated resources.

Table: aws_ecs_cluster - Query AWS ECS Clusters using SQL

The AWS ECS Cluster is a regional, logical grouping of services in Amazon Elastic Container Service (ECS). It allows you to manage and scale a group of tasks or services, and determine their placement across a set of Amazon EC2 instances. ECS Clusters help in running applications and services on a managed cluster of EC2 instances, eliminating the need to install, operate, and scale your own cluster management infrastructure.

Table Usage Guide

The aws_ecs_cluster table in Steampipe provides you with information about clusters within AWS Elastic Container Service (ECS). This table allows you, as a DevOps engineer, to query cluster-specific details, including its configuration, status, and associated resources. You can utilize this table to gather insights on clusters, such as cluster capacity providers, default capacity provider strategy, and more. The schema outlines for you the various attributes of the ECS cluster, including the cluster ARN, cluster name, status, and associated tags.

Examples

Basic info

Analyze the settings to understand the overall status and active services of your AWS ECS clusters. This is useful for maintaining optimal cluster performance and identifying any potential issues.

select
  cluster_arn,
  cluster_name,
  active_services_count,
  attachments,
  attachments_status,
  status
from
  aws_ecs_cluster;
select
  cluster_arn,
  cluster_name,
  active_services_count,
  attachments,
  attachments_status,
  status
from
  aws_ecs_cluster;
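
Count clusters by status

Building on the basic info above, you can group clusters by their status to see at a glance how many are active, provisioning, or failed. This sketch uses only the status column shown above and works unchanged in both PostgreSQL and SQLite.

select
  status,
  count(*) as cluster_count
from
  aws_ecs_cluster
group by
  status;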

List clusters that have failed to provision resources

Identify instances where resource provisioning has failed in certain clusters. This can be useful in troubleshooting and understanding the reasons for failure in resource allocation.

select
  cluster_arn,
  status
from
  aws_ecs_cluster
where
  status = 'FAILED';
select
  cluster_arn,
  status
from
  aws_ecs_cluster
where
  status = 'FAILED';

Get details of resources attached to each cluster

Explore the status and type of resources linked to each cluster in your AWS ECS setup. This helps you monitor the health and functionality of various components within your clusters.

select
  cluster_arn,
  attachment ->> 'id' as attachment_id,
  attachment ->> 'status' as attachment_status,
  attachment ->> 'type' as attachment_type
from
  aws_ecs_cluster,
  jsonb_array_elements(attachments) as attachment;
select
  cluster_arn,
  json_extract(attachment.value, '$.id') as attachment_id,
  json_extract(attachment.value, '$.status') as attachment_status,
  json_extract(attachment.value, '$.type') as attachment_type
from
  aws_ecs_cluster,
  json_each(attachments) as attachment;

List clusters with CloudWatch Container Insights disabled

Determine the areas in your AWS ECS clusters where CloudWatch Container Insights is disabled. This is beneficial in understanding and managing the monitoring capabilities of your clusters.

select
  cluster_arn,
  setting ->> 'Name' as name,
  setting ->> 'Value' as value
from
  aws_ecs_cluster,
  jsonb_array_elements(settings) as setting
where
  setting ->> 'Value' = 'disabled';
select
  cluster_arn,
  json_extract(setting.value, '$.Name') as name,
  json_extract(setting.value, '$.Value') as value
from
  aws_ecs_cluster,
  json_each(settings) as setting
where
  json_extract(setting.value, '$.Value') = 'disabled';
title description
Steampipe Table: aws_ecs_cluster_metric_cpu_utilization - Query AWS ECS Cluster Metrics using SQL
Allows users to query ECS Cluster CPU Utilization Metrics for a specified period.

Table: aws_ecs_cluster_metric_cpu_utilization - Query AWS ECS Cluster Metrics using SQL

The AWS ECS Cluster Metrics service allows you to monitor and collect data on CPU utilization in your Amazon Elastic Container Service (ECS) clusters. This feature provides insights into the efficiency of your clusters and can be used to optimize resource usage. You can query these metrics using SQL, allowing for easy integration and analysis of the data.

Table Usage Guide

The aws_ecs_cluster_metric_cpu_utilization table in Steampipe provides you with information about CPU utilization metrics of AWS Elastic Container Service (ECS) clusters. This table allows you, as a DevOps engineer, system administrator, or other technical professional, to query CPU utilization-specific details, including the average, maximum, and minimum CPU utilization, along with the corresponding timestamps. You can utilize this table to monitor CPU usage trends, identify potential performance issues, and optimize resource allocation. The schema outlines various attributes of the CPU utilization metric, including the cluster name, period, timestamp, and average, minimum, and maximum CPU utilization.

The aws_ecs_cluster_metric_cpu_utilization table provides you with metric statistics at 5-minute intervals for the most recent 5 days.

Examples

Basic info

Analyze the CPU utilization metrics of AWS ECS clusters over time to understand resource usage trends and optimize cluster performance. This information could be useful in identifying patterns, planning capacity, and managing costs effectively.

select
  cluster_name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization
order by
  cluster_name,
  timestamp;
select
  cluster_name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization
order by
  cluster_name,
  timestamp;

CPU Over 80% average

Identify instances where the average CPU utilization of your AWS ECS clusters exceeds 80%. This can help in managing resources effectively, ensuring optimal performance and avoiding potential bottlenecks.

select
  cluster_name,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization
where
  average > 80
order by
  cluster_name,
  timestamp;
select
  cluster_name,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization
where
  average > 80
order by
  cluster_name,
  timestamp;
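
Peak CPU utilization per cluster

Building on the example above, you can summarize the most recent five days of data points to find each cluster's peak and typical CPU load. This sketch uses only the columns shown in the examples above and works unchanged in both PostgreSQL and SQLite.

select
  cluster_name,
  max(maximum) as peak_cpu,
  avg(average) as avg_cpu
from
  aws_ecs_cluster_metric_cpu_utilization
group by
  cluster_name
order by
  peak_cpu desc;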
title description
Steampipe Table: aws_ecs_cluster_metric_cpu_utilization_daily - Query AWS ECS Cluster Metrics using SQL
Allows users to query AWS Elastic Container Service (ECS) Cluster Metrics, specifically CPU utilization on a daily basis.

Table: aws_ecs_cluster_metric_cpu_utilization_daily - Query AWS ECS Cluster Metrics using SQL

The AWS ECS Cluster is a logical grouping of tasks or services. It allows you to manage and scale a set of services or tasks together in an AWS environment. The CPU Utilization Metric provides data about the CPU usage of the services or tasks in the cluster, helping you monitor and optimize resource allocation on a daily basis.

Table Usage Guide

The aws_ecs_cluster_metric_cpu_utilization_daily table in Steampipe provides you with information about CPU utilization metrics within AWS Elastic Container Service (ECS) clusters. This table allows you, as a DevOps engineer, to query CPU utilization details on a daily basis, including the average, maximum, and minimum utilization. You can utilize this table to monitor and analyze CPU usage trends, identify potential performance issues, and optimize resource allocation. The schema outlines the various attributes of the CPU utilization metric for you, including the timestamp, period, unit, and statistical values.

Examples

Basic info

Explore the daily CPU usage patterns across your AWS ECS clusters. This can help you understand resource utilization trends and plan for capacity adjustments.

select
  cluster_name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization_daily
order by
  cluster_name,
  timestamp;
select
  cluster_name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization_daily
order by
  cluster_name,
  timestamp;

CPU Over 80% average

Explore instances where the average CPU utilization exceeds 80% in AWS ECS clusters. This can help in identifying potential performance issues and aid in capacity planning.

select
  cluster_name,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization_daily
where
  average > 80
order by
  cluster_name,
  timestamp;
select
  cluster_name,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization_daily
where
  average > 80
order by
  cluster_name,
  timestamp;

CPU daily average < 1%

Identify instances where the average daily CPU utilization of AWS ECS clusters is less than 1%. This can help in understanding underutilized resources and potentially save costs by downsizing or eliminating unnecessary clusters.

select
  cluster_name,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization_daily
where
  average < 1
order by
  cluster_name,
  timestamp;
select
  cluster_name,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization_daily
where
  average < 1
order by
  cluster_name,
  timestamp;
title description
Steampipe Table: aws_ecs_cluster_metric_cpu_utilization_hourly - Query AWS ECS Cluster Metrics using SQL
Allows users to query AWS ECS Cluster CPU Utilization Metrics on an hourly basis.

Table: aws_ecs_cluster_metric_cpu_utilization_hourly - Query AWS ECS Cluster Metrics using SQL

The AWS ECS Cluster Metrics is a feature of Amazon Elastic Container Service (ECS) that provides CPU utilization data. It allows you to monitor and troubleshoot your applications running on ECS. The CPU Utilization metric represents the percentage of total CPU units that are currently in use on a cluster for an hour.

Table Usage Guide

The aws_ecs_cluster_metric_cpu_utilization_hourly table in Steampipe gives you information about the CPU utilization metrics of AWS ECS (Elastic Container Service) clusters on an hourly basis. This table allows you, as a DevOps engineer, data analyst, or other technical professional, to query cluster-specific details, including the average, maximum, and minimum CPU utilization percentages. You can utilize this table to monitor the performance of your ECS clusters, identify potential resource bottlenecks, and optimize resource allocation. The schema outlines the various attributes of the ECS cluster CPU utilization for you, including the cluster name, timestamp, average utilization, maximum utilization, and minimum utilization.

The aws_ecs_cluster_metric_cpu_utilization_hourly table provides you with metric statistics at 1 hour intervals for the most recent 60 days.

Examples

Basic info

Explore the performance of various AWS ECS clusters by tracking their CPU utilization over time. This allows for effective resource management and helps in identifying potential performance issues.

select
  cluster_name,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization_hourly
order by
  cluster_name,
  timestamp;

CPU Over 80% average

Discover the instances where the average CPU utilization exceeds 80% in your AWS ECS clusters, allowing you to identify potential performance issues and optimize resource allocation.

select
  cluster_name,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization_hourly
where
  average > 80
order by
  cluster_name,
  timestamp;
select
  cluster_name,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization_hourly
where
  average > 80
order by
  cluster_name,
  timestamp;

CPU hourly average < 1%

Determine the areas in which AWS ECS clusters are underutilized, by pinpointing instances where the average CPU usage is less than 1% on an hourly basis. This allows for efficient resource management and cost optimization by identifying potential opportunities for downsizing.

select
  cluster_name,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization_hourly
where
  average < 1
order by
  cluster_name,
  timestamp;
select
  cluster_name,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_ecs_cluster_metric_cpu_utilization_hourly
where
  average < 1
order by
  cluster_name,
  timestamp;
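The hourly samples can also be aggregated per cluster to summarize peak and overall load. A sketch using the columns shown in the examples above:

```sql
select
  cluster_name,
  max(maximum) as peak_cpu,
  round(avg(average)::numeric, 2) as overall_avg_cpu
from
  aws_ecs_cluster_metric_cpu_utilization_hourly
group by
  cluster_name
order by
  peak_cpu desc;
```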
title description
Steampipe Table: aws_ecs_container_instance - Query AWS ECS Container Instance using SQL
Allows users to query AWS ECS Container Instance to retrieve data about the Amazon Elastic Container Service (ECS) container instances. This includes information about the container instance ARN, status, running tasks count, pending tasks count, agent connected status, and more.

Table: aws_ecs_container_instance - Query AWS ECS Container Instance using SQL

The AWS ECS Container Instance is a resource within the Amazon Elastic Container Service (ECS). It refers to a single EC2 instance that is part of an ECS cluster, which runs containerized applications. It provides the necessary infrastructure to manage, schedule, and run Docker containers on a cluster.

Table Usage Guide

The aws_ecs_container_instance table in Steampipe provides you with information about the Amazon Elastic Container Service (ECS) container instances. This table allows you, as a DevOps engineer, to query container-specific details, including the container instance ARN, status, running tasks count, pending tasks count, agent connected status, and more. You can utilize this table to gather insights on container instances, such as the number of running or pending tasks, the status of the agent connection, and more. The schema outlines the various attributes of the ECS container instance for you, including the instance ARN, instance type, launch type, and associated tags.

Examples

Basic info

Determine the areas in which your AWS Elastic Container Service (ECS) instances are running and their status. This can help you identify instances where there are pending tasks, providing insights for potential performance improvement.

select
  arn,
  ec2_instance_id,
  status,
  status_reason,
  running_tasks_count,
  pending_tasks_count
from
  aws_ecs_container_instance;

List container instances that have failed registration

Determine the areas in which container instances have failed to register within the AWS ECS service. This is useful in diagnosing and resolving issues that could potentially disrupt your application's performance or availability.

select
  arn,
  status,
  status_reason
from
  aws_ecs_container_instance
where
  status = 'REGISTRATION_FAILED';

Get details of resources attached to each container instance

Explore which resources are linked to each container instance in AWS ECS to better manage and track resources. This can help in identifying any potential issues or inefficiencies in resource allocation.

select
  arn,
  attachment ->> 'id' as attachment_id,
  attachment ->> 'status' as attachment_status,
  attachment ->> 'type' as attachment_type
from
  aws_ecs_container_instance,
  jsonb_array_elements(attachments) as attachment;
select
  arn,
  json_extract(attachment.value, '$.id') as attachment_id,
  json_extract(attachment.value, '$.status') as attachment_status,
  json_extract(attachment.value, '$.type') as attachment_type
from
  aws_ecs_container_instance,
  json_each(attachments) as attachment;

List container instances using a given AMI

Determine the areas in which specific Amazon Machine Images (AMIs) are being used within your container instances. This is particularly useful for identifying potential security risks or for troubleshooting purposes.

select
  arn,
  setting ->> 'Name' as name,
  setting ->> 'Value' as value
from
  aws_ecs_container_instance,
  jsonb_array_elements(attributes) as setting
where
  setting ->> 'Name' = 'ecs.ami-id' and
  setting ->> 'Value' = 'ami-0babb0c4a4e5769b8';
select
  arn,
  json_extract(setting.value, '$.Name') as name,
  json_extract(setting.value, '$.Value') as value
from
  aws_ecs_container_instance,
  json_each(attributes) as setting
where
  json_extract(setting.value, '$.Name') = 'ecs.ami-id' and
  json_extract(setting.value, '$.Value') = 'ami-0babb0c4a4e5769b8';
title description
Steampipe Table: aws_ecs_service - Query AWS Elastic Container Service using SQL
Allows users to query AWS Elastic Container Service (ECS) to retrieve information about the services within the ECS clusters.

Table: aws_ecs_service - Query AWS Elastic Container Service using SQL

The AWS Elastic Container Service (ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. ECS eliminates the need for you to install, operate, and scale your own cluster management infrastructure. With simple API calls, you can launch and stop Docker-enabled applications, query the complete state of your application, and access many familiar features like security groups, Elastic Load Balancing, EBS volumes, and IAM roles.

Table Usage Guide

The aws_ecs_service table in Steampipe provides you with information about the services within the AWS Elastic Container Service (ECS) clusters. This table lets you, as a DevOps engineer, query service-specific details, including service status, task definitions, and associated metadata. You can utilize this table to gather insights on services, such as service health status, task definitions being used, and more. The schema outlines the various attributes of the ECS service for you, including the service ARN, cluster ARN, task definition, desired count, running count, and associated tags.

Examples

Basic info

Explore the status and details of various tasks within your AWS ECS service. This can help you understand the state of your tasks and identify any potential issues or anomalies.

select
  service_name,
  arn,
  cluster_arn,
  task_definition,
  status
from
  aws_ecs_service;

List services not using the latest version of AWS Fargate platform

Determine the areas in which your services are not utilizing the latest version of the AWS Fargate platform. This can be useful in identifying outdated services that may potentially benefit from an upgrade for enhanced performance and security.

select
  service_name,
  arn,
  launch_type,
  platform_version
from
  aws_ecs_service
where
  launch_type = 'FARGATE'
  and platform_version is not null;

List inactive services

Discover the segments that are inactive within your AWS ECS services. This can be particularly useful when cleaning up or troubleshooting your environment.

select
  service_name,
  arn,
  status
from
  aws_ecs_service
where
  status = 'INACTIVE';
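The usage guide above notes that the schema includes desired and running counts. Assuming these are exposed as desired_count and running_count columns, a sketch for spotting services that have not reached their desired scale:

```sql
select
  service_name,
  cluster_arn,
  desired_count, -- assumed column name
  running_count  -- assumed column name
from
  aws_ecs_service
where
  desired_count <> running_count;
```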
title description
Steampipe Table: aws_ecs_task - Query AWS ECS Tasks using SQL
Allows users to query AWS ECS Tasks to obtain detailed information about each task, including its status, task definition, cluster, and other related metadata.

Table: aws_ecs_task - Query AWS ECS Tasks using SQL

AWS Elastic Container Service (ECS) Tasks are a running instance of an Amazon ECS task. They are a scalable unit of computing that contain everything needed to run an application on Amazon ECS. ECS tasks can be used to run applications on a managed cluster of Amazon EC2 instances.

Table Usage Guide

The aws_ecs_task table in Steampipe provides you with information about tasks within Amazon Elastic Container Service (ECS). This table enables you, as a DevOps engineer, to query task-specific details, including the current task status, task definition, associated cluster, and other metadata. You can utilize this table to gather insights on tasks, such as tasks that are running, stopped, or pending, tasks associated with specific clusters, and more. The schema outlines the various attributes of the ECS task for you, including the task ARN, last status, task definition ARN, and associated tags.

Examples

Basic info

Determine the status and launch type of tasks within your AWS Elastic Container Service (ECS) to manage and optimize your ECS resources effectively. This can help in maintaining the desired state of your tasks and ensuring they are running as expected.

select
  cluster_name,
  desired_status,
  launch_type,
  task_arn
from
  aws_ecs_task;

List task attachment details

This query is useful for gaining insights into the status and types of attachments associated with specific tasks within a cluster. This can help in managing and troubleshooting tasks effectively in a real-world scenario.

select
  cluster_name,
  task_arn,
  a ->> 'Id' as attachment_id,
  a ->> 'Status' as attachment_status,
  a ->> 'Type' as attachment_type,
  jsonb_pretty(a -> 'Details') as attachment_details
from
  aws_ecs_task,
  jsonb_array_elements(attachments) as a;
select
  cluster_name,
  task_arn,
  json_extract(a.value, '$.Id') as attachment_id,
  json_extract(a.value, '$.Status') as attachment_status,
  json_extract(a.value, '$.Type') as attachment_type,
  json_extract(a.value, '$.Details') as attachment_details
from
  aws_ecs_task,
  json_each(attachments) as a;

List task protection details

Explore the protection status and expiry dates of tasks within your AWS ECS clusters. This can help ensure all tasks are adequately protected and any expiring protections are promptly renewed.

select
  cluster_name,
  task_arn,
  protection ->> 'ProtectionEnabled' as protection_enabled,
  protection ->> 'ExpirationDate' as protection_expiration_date
from
  aws_ecs_task;
select
  cluster_name,
  task_arn,
  json_extract(protection, '$.ProtectionEnabled') as protection_enabled,
  json_extract(protection, '$.ExpirationDate') as protection_expiration_date
from
  aws_ecs_task;
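Building on the columns used in the basic info example, tasks can also be counted per cluster and desired status to get a quick fleet overview; a sketch:

```sql
select
  cluster_name,
  desired_status,
  count(*) as task_count
from
  aws_ecs_task
group by
  cluster_name,
  desired_status;
```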
title description
Steampipe Table: aws_ecs_task_definition - Query AWS ECS Task Definitions using SQL
Allows users to query AWS ECS Task Definitions to gain insights into the configuration of running tasks in an ECS service. The table provides details such as task definition ARN, family, network mode, revision, status, and more.

Table: aws_ecs_task_definition - Query AWS ECS Task Definitions using SQL

The AWS ECS Task Definition is a blueprint that describes how a Docker container should launch. It specifies the Docker image to use for the container, the required resources, and other configurations. Task Definitions are used in conjunction with the Amazon Elastic Container Service (ECS) to run containers reliably on AWS.

Table Usage Guide

The aws_ecs_task_definition table in Steampipe provides you with information about the task definitions within AWS Elastic Container Service (ECS). This table allows you, as a DevOps engineer, to query task-specific details, including the task definition ARN, family, network mode, revision, and status. You can utilize this table to gather insights on task definitions, such as their configuration, associated IAM roles, container definitions, volumes, and more. The schema outlines the various attributes of the ECS task definition for you, including the task definition ARN, family, requires compatibility, and associated tags.

Examples

Basic info

Explore the configuration and status of task definitions in AWS ECS to understand their processing power and network configuration. This can be useful for optimizing resource allocation and network settings for better system performance.

select
  task_definition_arn,
  cpu,
  network_mode,
  title,
  status,
  tags
from
  aws_ecs_task_definition;

Count the number of containers attached to each task definition

Explore the distribution of containers across various task definitions to better manage and optimize the use of resources in an AWS ECS environment.

select
  task_definition_arn,
  jsonb_array_length(container_definitions) as num_of_containers
from
  aws_ecs_task_definition;
select
  task_definition_arn,
  json_array_length(container_definitions) as num_of_containers
from
  aws_ecs_task_definition;

List containers with elevated privileges on the host container instance

Determine the areas in which containers are operating with elevated privileges within your host container instance. This is useful to identify potential security risks and ensure secure configuration of your container infrastructure.

select
  task_definition_arn,
  cd ->> 'Privileged' as privileged,
  cd ->> 'Name' as container_name
from
  aws_ecs_task_definition,
  jsonb_array_elements(container_definitions) as cd
where
  cd ->> 'Privileged' = 'true';
select
  task_definition_arn,
  json_extract(cd.value, '$.Privileged') as privileged,
  json_extract(cd.value, '$.Name') as container_name
from
  aws_ecs_task_definition,
  json_each(container_definitions) as cd
where
  json_extract(cd.value, '$.Privileged') = 'true';

List task definitions with containers where logging is disabled

This query is useful in identifying all task definitions with containers where logging has been disabled in the AWS ECS system. This can aid in improving security and compliance by enabling you to quickly pinpoint areas where logging should be enabled for better tracking and auditing.

select
  task_definition_arn,
  cd ->> 'Name' as container_name,
  cd ->> 'LogConfiguration' as log_configuration
from
  aws_ecs_task_definition,
  jsonb_array_elements(container_definitions) as cd
where
 cd ->> 'LogConfiguration' is null;
select
  task_definition_arn,
  json_extract(cd.value, '$.Name') as container_name,
  json_extract(cd.value, '$.LogConfiguration') as log_configuration
from
  aws_ecs_task_definition,
  json_each(container_definitions) as cd
where
 json_extract(cd.value, '$.LogConfiguration') is null;
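Using the network_mode column from the basic info example, task definitions can also be grouped to see which network modes are in use across the account; a sketch:

```sql
select
  network_mode,
  count(*) as task_definition_count
from
  aws_ecs_task_definition
group by
  network_mode;
```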
title description
Steampipe Table: aws_efs_access_point - Query Amazon EFS Access Points using SQL
Allows users to query Amazon EFS Access Points, providing detailed information about each access point's configuration, including the file system it is associated with, its access point ID, and other related metadata.

Table: aws_efs_access_point - Query Amazon EFS Access Points using SQL

The Amazon Elastic File System (EFS) Access Points provide a customized view into an EFS file system. They enable applications to use a specific operating system user and group, and a directory in the file system as a root directory. By using EFS Access Points, you can enforce a user identity, permission strategy, and root directory for each application using the file system.

Table Usage Guide

The aws_efs_access_point table in Steampipe provides you with information about Access Points within Amazon Elastic File System (EFS). This table enables you, as a DevOps engineer, system administrator, or other technical professional, to query access point-specific details, including the file system it is associated with, its access point ID, and other related metadata. You can utilize this table to gather insights on access points, such as their operating system type, root directory creation info, and more. The schema outlines the various attributes of the access point for you, including the access point ARN, creation time, life cycle state, and associated tags.

Examples

Basic info

Analyze the settings to understand the status and ownership of various access points within Amazon Elastic File System (EFS). This can help in assessing the elements within your EFS, pinpointing specific locations where changes might be needed.

select
  name,
  access_point_id,
  access_point_arn,
  file_system_id,
  life_cycle_state,
  owner_id,
  root_directory
from
  aws_efs_access_point;

List access points for each file system

Identify the access points associated with each file system to gain insights into file ownership and root directory details. This can be useful for managing and auditing file system access within an AWS environment.

select
  name,
  access_point_id,
  file_system_id,
  owner_id,
  root_directory
from
  aws_efs_access_point;

List access points in the error lifecycle state

Identify instances where access points in the AWS Elastic File System are in an error state. This could be useful in diagnosing system issues or assessing overall system health.

select
  name,
  access_point_id,
  life_cycle_state,
  file_system_id,
  owner_id,
  root_directory
from
  aws_efs_access_point
where
  life_cycle_state = 'error';
title description
Steampipe Table: aws_efs_file_system - Query AWS Elastic File System using SQL
Allows users to query AWS Elastic File System (EFS) file systems, providing detailed information about each file system such as its ID, ARN, creation token, performance mode, and lifecycle state.

Table: aws_efs_file_system - Query AWS Elastic File System using SQL

The AWS Elastic File System (EFS) is a scalable file storage for use with Amazon EC2 instances. It's easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. With EFS, you have the flexibility to store and retrieve data across different AWS regions and availability zones.

Table Usage Guide

The aws_efs_file_system table in Steampipe provides you with information about file systems within AWS Elastic File System (EFS). This table allows you, as a DevOps engineer, to query file system-specific details, including its ID, ARN, creation token, performance mode, lifecycle state, and associated metadata. You can utilize this table to gather insights on file systems, such as their performance mode, lifecycle state, and more. The schema outlines the various attributes of the EFS file system for you, including the file system ID, creation token, tags, and associated mount targets.

Examples

Basic info

Discover the segments that have automatic backups enabled in your AWS Elastic File System (EFS). This helps in assessing the elements within your system that are safeguarded and those that might need additional data protection measures.

select
  name,
  file_system_id,
  owner_id,
  automatic_backups,
  creation_token,
  creation_time,
  life_cycle_state,
  number_of_mount_targets,
  performance_mode,
  throughput_mode
from
  aws_efs_file_system;

List file systems which are not encrypted at rest

Discover the segments of your AWS Elastic File System that are not encrypted, allowing you to identify potential security risks and take necessary action to ensure data protection.

select
  file_system_id,
  encrypted,
  kms_key_id,
  region
from
  aws_efs_file_system
where
  not encrypted;
select
  file_system_id,
  encrypted,
  kms_key_id,
  region
from
  aws_efs_file_system
where
  encrypted = 0;

Get the size of the data stored in each file system

Assess the elements within your file system to understand the distribution of data storage. This is useful for managing storage resources effectively and identifying opportunities for cost optimization.

select
  file_system_id,
  size_in_bytes ->> 'Value' as data_size,
  size_in_bytes ->> 'Timestamp' as data_size_timestamp,
  size_in_bytes ->> 'ValueInIA' as data_size_infrequent_access_storage,
  size_in_bytes ->> 'ValueInStandard' as data_size_standard_storage
from
  aws_efs_file_system;
select
  file_system_id,
  json_extract(size_in_bytes, '$.Value') as data_size,
  json_extract(size_in_bytes, '$.Timestamp') as data_size_timestamp,
  json_extract(size_in_bytes, '$.ValueInIA') as data_size_infrequent_access_storage,
  json_extract(size_in_bytes, '$.ValueInStandard') as data_size_standard_storage
from
  aws_efs_file_system;

List file systems which have root access

Identify instances where file systems have root access, which can be critical in understanding the security posture of your AWS Elastic File System, and ensuring that only authorized users have such elevated privileges.

select
  title,
  p as principal,
  a as action,
  s ->> 'Effect' as effect,
  s -> 'Condition' as conditions
from
  aws_efs_file_system,
  jsonb_array_elements(policy_std -> 'Statement') as s,
  jsonb_array_elements_text(s -> 'Principal' -> 'AWS') as p,
  jsonb_array_elements_text(s -> 'Action') as a
where
  a in ('elasticfilesystem:clientrootaccess');
select
  title,
  json_extract(principal.value, '$') as principal,
  json_extract(action.value, '$') as action,
  json_extract(statement.value, '$.Effect') as effect,
  json_extract(statement.value, '$.Condition') as conditions
from
  aws_efs_file_system,
  json_each(policy_std, '$.Statement') as statement,
  json_each(json_extract(statement.value, '$.Principal.AWS')) as principal,
  json_each(json_extract(statement.value, '$.Action')) as action
where
  json_extract(action.value, '$') = 'elasticfilesystem:clientrootaccess';

List file systems that do not enforce encryption in transit

Discover the segments of your AWS Elastic File System that are not enforcing encryption in transit. This can help improve your system's security by identifying potential vulnerabilities.

select
  title
from
  aws_efs_file_system
where
  title not in (
    select
      title
    from
      aws_efs_file_system,
      jsonb_array_elements(policy_std -> 'Statement') as s,
      jsonb_array_elements_text(s -> 'Principal' -> 'AWS') as p,
      jsonb_array_elements_text(s -> 'Action') as a,
      jsonb_array_elements_text(
        s -> 'Condition' -> 'Bool' -> 'aws:securetransport'
      ) as ssl
    where
      p = '*'
      and s ->> 'Effect' = 'Deny'
      and ssl :: bool = false
  );
select
  title
from
  aws_efs_file_system
where
  title not in (
    select
      f.title
    from
      aws_efs_file_system as f,
      json_each(f.policy_std, '$.Statement') as s,
      json_each(json_extract(s.value, '$.Principal.AWS')) as p,
      json_each(json_extract(s.value, '$.Condition.Bool."aws:securetransport"')) as ssl
    where
      p.value = '*'
      and json_extract(s.value, '$.Effect') = 'Deny'
      and ssl.value = 'false'
  );

List file systems with automatic backups enabled

Gain insights into the file systems that have automatic backups enabled. This is useful for ensuring that your data is being regularly backed up for recovery purposes.

select
  name,
  automatic_backups,
  arn,
  file_system_id
from
  aws_efs_file_system
where
  automatic_backups = 'enabled';
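Using the number_of_mount_targets column from the basic info example, a sketch for finding file systems that have no mount targets and are therefore not reachable from any VPC:

```sql
select
  name,
  file_system_id,
  number_of_mount_targets
from
  aws_efs_file_system
where
  number_of_mount_targets = 0;
```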
title description
Steampipe Table: aws_efs_mount_target - Query AWS EFS Mount Targets using SQL
Allows users to query AWS EFS Mount Targets for detailed information about each mount target's configuration, status, and associated resources.

Table: aws_efs_mount_target - Query AWS EFS Mount Targets using SQL

The AWS EFS Mount Target is a component of Amazon Elastic File System (EFS) that provides a network interface for a file system to connect to. It enables you to mount an Amazon EFS file system in your Amazon EC2 instance. This network interface allows the file system to connect to the network of a VPC.

Table Usage Guide

The aws_efs_mount_target table in Steampipe provides you with information about mount targets within AWS Elastic File System (EFS). This table allows you, as a DevOps engineer, to query mount target-specific details, including the file system ID, mount target ID, subnet ID, and security groups. You can utilize this table to gather insights on mount targets, such as their availability, network interface, and life cycle state. The schema outlines the various attributes of the EFS mount target for you, including the IP address, network interface ID, owner ID, and associated tags.

Examples

Basic info

Explore the status and location of your Amazon EFS mount targets. This query is useful for understanding the availability and lifecycle state of your mount targets, which can help in optimizing resource usage and troubleshooting.

select
  mount_target_id,
  file_system_id,
  life_cycle_state,
  availability_zone_id,
  availability_zone_name
from
  aws_efs_mount_target;

Get network details for each mount target

Explore the network configuration of each mount target to understand its association with different network interfaces, subnets, and virtual private clouds. This can help in assessing network-related issues and ensuring optimal configuration for enhanced performance.

select
  mount_target_id,
  network_interface_id,
  subnet_id,
  vpc_id
from
  aws_efs_mount_target;
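Since the file system table documented earlier shares the file_system_id column, mount targets can be joined back to their file systems to see both sides at once; a sketch:

```sql
select
  mt.mount_target_id,
  mt.life_cycle_state,
  fs.name as file_system_name,
  fs.performance_mode
from
  aws_efs_mount_target as mt
  join aws_efs_file_system as fs
    on mt.file_system_id = fs.file_system_id;
```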
title description
Steampipe Table: aws_eks_addon - Query AWS EKS Add-Ons using SQL
Allows users to query AWS EKS Add-Ons to retrieve information about add-ons associated with each Amazon EKS cluster.

Table: aws_eks_addon - Query AWS EKS Add-Ons using SQL

The AWS EKS Add-Ons are additional software components that enhance the functionality of your Amazon Elastic Kubernetes Service (EKS) clusters. They provide a way to deploy and manage Kubernetes applications, improve cluster security, and simplify cluster management. Using AWS EKS Add-Ons, you can automate time-consuming tasks such as patching, updating, and scaling.

Table Usage Guide

The aws_eks_addon table in Steampipe provides you with information about add-ons associated with each Amazon EKS cluster. This table allows you, as a DevOps engineer, to query add-on-specific details, including add-on versions, status, and associated metadata. You can utilize this table to gather insights on add-ons, such as the current version of each add-on, the health of add-ons, and more. The schema outlines the various attributes of the EKS add-on for you, including the add-on name, add-on version, service account role ARN, and associated tags.

Examples

Basic info

Explore the status of various add-ons within your AWS EKS clusters to understand their versions and associated roles. This can be beneficial for assessing the current configuration and ensuring the optimal functionality of your clusters.

select
  addon_name,
  arn,
  addon_version,
  cluster_name,
  status,
  service_account_role_arn
from
  aws_eks_addon;

List add-ons that are not active

Identify instances where certain add-ons in the AWS EKS service are not active. This can help in monitoring and managing resources effectively by pinpointing inactive add-ons that may need attention or removal.

select
  addon_name,
  arn,
  cluster_name,
  status
from
  aws_eks_addon
where
  status <> 'ACTIVE';
select
  addon_name,
  arn,
  cluster_name,
  status
from
  aws_eks_addon
where
  status != 'ACTIVE';

Get count of add-ons by cluster

Determine the total number of add-ons per cluster within your AWS EKS environment to better manage resources and understand utilization.

select
  cluster_name,
  count(addon_name) as addon_count
from
  aws_eks_addon
group by
  cluster_name;
title description
Steampipe Table: aws_eks_addon_version - Query AWS EKS Add-On Versions using SQL
Allows users to query AWS EKS Add-On Versions.

Table: aws_eks_addon_version - Query AWS EKS Add-On Versions using SQL

The AWS EKS Add-On Versions are a part of the Amazon Elastic Kubernetes Service (EKS), which is a managed service that makes it easy for you to run Kubernetes on AWS without needing to install and operate your own Kubernetes control plane. Add-Ons help to automate the process of installing, upgrading, and operating additional Kubernetes software. They are versions of Kubernetes software components that can be installed onto your Amazon EKS clusters.

Table Usage Guide

The aws_eks_addon_version table in Steampipe provides you with information about Add-On versions within Amazon Elastic Kubernetes Service (EKS). This table allows you, as a DevOps engineer, to query add-on specific details, including addon name, addon version, architecture, and associated metadata. You can utilize this table to gather insights on add-ons, such as the add-on version status, the specific architectures it supports, and more. The schema outlines the various attributes of the EKS add-on for you, including the add-on name, add-on version, and supported architectures.

Examples

Basic info

Explore which addons are available for your AWS EKS service and identify their versions to ensure compatibility and optimal performance. This can be beneficial in maintaining an updated and efficient system.

select
  addon_name,
  addon_version,
  type
from
  aws_eks_addon_version;

Count the number of add-on versions by add-on

Determine the areas in which various versions of add-ons are being used within your AWS Elastic Kubernetes Service (EKS). This can help in understanding the distribution and usage of different add-on versions, aiding in effective management and potential upgrades.

select
  addon_name,
  count(addon_version) as addon_version_count
from
  aws_eks_addon_version
group by
  addon_name;

Get configuration details of each add-on version

Explore the specific configuration details for each version of an add-on to understand how it's set up and functions. This can help to identify any potential issues or areas for improvement in your AWS EKS environment.

PostgreSQL:

select
  addon_name,
  addon_version,
  addon_configuration -> '$defs' -> 'extraVolumeTags' ->> 'description' as addon_configuration_def_description,
  addon_configuration -> '$defs' -> 'extraVolumeTags' -> 'propertyNames' as addon_configuration_def_property_names,
  addon_configuration -> '$defs' -> 'extraVolumeTags' -> 'patternProperties' as addon_configuration_def_pattern_properties,
  addon_configuration -> 'properties' as addon_configuration_properties
from
  aws_eks_addon_version
limit 10;

SQLite:

select
  addon_name,
  addon_version,
  json_extract(addon_configuration, '$.$defs.extraVolumeTags.description') as addon_configuration_def_description,
  json_extract(addon_configuration, '$.$defs.extraVolumeTags.propertyNames') as addon_configuration_def_property_names,
  json_extract(addon_configuration, '$.$defs.extraVolumeTags.patternProperties') as addon_configuration_def_pattern_properties,
  json_extract(addon_configuration, '$.properties') as addon_configuration_properties
from
  aws_eks_addon_version
limit 10;
Title: Steampipe Table: aws_eks_cluster - Query AWS Elastic Kubernetes Service Cluster using SQL
Description: Allows users to query AWS Elastic Kubernetes Service Cluster data, including cluster configurations, statuses, and associated metadata.

Table: aws_eks_cluster - Query AWS Elastic Kubernetes Service Cluster using SQL

The AWS Elastic Kubernetes Service (EKS) Cluster is a managed service that simplifies the deployment, management, and scaling of containerized applications using Kubernetes, an open-source system. EKS runs Kubernetes control plane instances across multiple AWS availability zones to ensure high availability, automatically detects and replaces unhealthy control plane instances, and provides on-demand, zero downtime upgrades and patching. It integrates with AWS services to provide scalability and security for your applications, including Elastic Load Balancing for load distribution, IAM for authentication, and Amazon VPC for isolation.

Table Usage Guide

The aws_eks_cluster table in Steampipe provides you with information about EKS clusters within AWS Elastic Kubernetes Service (EKS). This table enables you, as a DevOps engineer, to query cluster-specific details, including cluster name, status, endpoint, and associated metadata. You can utilize this table to gather insights on clusters, such as their current status, role ARN, VPC configurations, and more. The schema outlines the various attributes of the EKS cluster, including the cluster ARN, creation date, attached security groups, and associated tags for you.

Examples

Basic info

Determine the status and identity of your Amazon EKS clusters to assess their operational condition and identify any potential issues. This can help maintain optimal performance and security within your AWS environment.

select
  name,
  arn,
  endpoint,
  identity,
  status
from
  aws_eks_cluster;

Get the VPC configuration for each cluster

This query helps to assess the configuration of each cluster's Virtual Private Cloud (VPC) in an AWS EKS setup. It can be used to gain insights into the cluster's security group ID, endpoint access details, CIDR blocks for public access, associated security group IDs, subnet IDs, and the VPC ID, which can be crucial for managing network accessibility and security.

PostgreSQL:

select
  name,
  resources_vpc_config ->> 'ClusterSecurityGroupId' as cluster_security_group_id,
  resources_vpc_config ->> 'EndpointPrivateAccess' as endpoint_private_access,
  resources_vpc_config ->> 'EndpointPublicAccess' as endpoint_public_access,
  resources_vpc_config ->> 'PublicAccessCidrs' as public_access_cidrs,
  resources_vpc_config ->> 'SecurityGroupIds' as security_group_ids,
  resources_vpc_config -> 'SubnetIds' as subnet_ids,
  resources_vpc_config ->> 'VpcId' as vpc_id
from
  aws_eks_cluster;

SQLite:

select
  name,
  json_extract(resources_vpc_config, '$.ClusterSecurityGroupId') as cluster_security_group_id,
  json_extract(resources_vpc_config, '$.EndpointPrivateAccess') as endpoint_private_access,
  json_extract(resources_vpc_config, '$.EndpointPublicAccess') as endpoint_public_access,
  json_extract(resources_vpc_config, '$.PublicAccessCidrs') as public_access_cidrs,
  json_extract(resources_vpc_config, '$.SecurityGroupIds') as security_group_ids,
  json_extract(resources_vpc_config, '$.SubnetIds') as subnet_ids,
  json_extract(resources_vpc_config, '$.VpcId') as vpc_id
from
  aws_eks_cluster;

List disabled log types for each cluster

Determine the areas in which log types are disabled for each cluster in AWS EKS service. This is useful for identifying potential gaps in your logging strategy, ensuring comprehensive coverage for effective monitoring and debugging.

PostgreSQL:

select
  name,
  i ->> 'Enabled' as enabled,
  i ->> 'Types' as types
from
  aws_eks_cluster,
  jsonb_array_elements(logging -> 'ClusterLogging') as i
where
  i ->> 'Enabled' = 'false';

SQLite (note: the json_each path argument must start with '$', and json_extract returns JSON booleans as 0/1):

select
  name,
  json_extract(i.value, '$.Enabled') as enabled,
  json_extract(i.value, '$.Types') as types
from
  aws_eks_cluster,
  json_each(logging, '$.ClusterLogging') as i
where
  json_extract(i.value, '$.Enabled') = 0;

List clusters not running Kubernetes version 1.19

Identify those clusters within your AWS EKS environment that are not operating on Kubernetes version 1.19. This can be useful to ensure compliance with specific version requirements or to plan for necessary upgrades.

PostgreSQL:

select
  name,
  arn,
  version
from
  aws_eks_cluster
where
  version <> '1.19';

SQLite:

select
  name,
  arn,
  version
from
  aws_eks_cluster
where
  version != '1.19';
Title: Steampipe Table: aws_eks_fargate_profile - Query AWS EKS Fargate Profiles using SQL
Description: Allows users to query AWS EKS Fargate Profiles and retrieve data such as the Fargate profile name, ARN, status, and more.

Table: aws_eks_fargate_profile - Query AWS EKS Fargate Profiles using SQL

The AWS EKS Fargate Profile is a component of Amazon Elastic Kubernetes Service (EKS) that allows you to run Kubernetes pods on AWS Fargate. With Fargate, you can focus on designing and building your applications instead of managing the infrastructure that runs them. It eliminates the need for you to choose server types, decide when to scale your node groups, or optimize cluster packing.

Table Usage Guide

The aws_eks_fargate_profile table in Steampipe provides you with information about Fargate Profiles within Amazon Elastic Kubernetes Service (EKS). This table allows you as a DevOps engineer to query profile-specific details, including the profile name, ARN, status, and the EKS cluster to which it belongs. You can utilize this table to gather insights on Fargate profiles, such as profiles associated with a specific EKS cluster, the status of the profiles, and more. The schema outlines the various attributes of the EKS Fargate profile for you, including the profile name, ARN, status, EKS cluster, and associated tags.

Examples

Basic info

Determine the areas in which AWS EKS Fargate profiles are being utilized. This query can help you assess the status and creation date of these profiles, offering insights for resource management and optimization.

select
  fargate_profile_name,
  fargate_profile_arn,
  cluster_name,
  created_at,
  status,
  tags
from
  aws_eks_fargate_profile;

List fargate profiles which are inactive

Identify instances where AWS EKS Fargate profiles are not currently active. This can be useful for troubleshooting, maintenance, or resource optimization purposes.

PostgreSQL:

select
  fargate_profile_name,
  fargate_profile_arn,
  cluster_name,
  created_at,
  status
from
  aws_eks_fargate_profile
where
  status <> 'ACTIVE';

SQLite:

select
  fargate_profile_name,
  fargate_profile_arn,
  cluster_name,
  created_at,
  status
from
  aws_eks_fargate_profile
where
  status != 'ACTIVE';

Get the subnet configuration for each fargate profile

Explore the configurations of various Fargate profiles within your EKS clusters to understand the availability and IP address count for each associated subnet. This can be beneficial to manage resources efficiently and ensure optimal performance of your applications.

PostgreSQL (using jsonb_array_elements_text so the subnet ID unnests as text and can be compared directly):

select
  f.fargate_profile_name,
  f.cluster_name,
  f.status as fargate_profile_status,
  s.availability_zone,
  s.available_ip_address_count,
  s.cidr_block,
  s.vpc_id
from
  aws_eks_fargate_profile as f,
  aws_vpc_subnet as s,
  jsonb_array_elements_text(f.subnets) as subnet
where
  s.subnet_id = subnet;

SQLite:

select
  f.fargate_profile_name,
  f.cluster_name,
  f.status as fargate_profile_status,
  s.availability_zone,
  s.available_ip_address_count,
  s.cidr_block,
  s.vpc_id
from
  aws_eks_fargate_profile as f,
  aws_vpc_subnet as s
where
  s.subnet_id in (select value from json_each(f.subnets));

List fargate profiles for clusters not running Kubernetes version greater than 1.19

Explore which Fargate profiles are associated with clusters not running Kubernetes version greater than 1.19. This can be beneficial in identifying outdated clusters, facilitating necessary upgrades to improve system performance and security.

PostgreSQL:

select
  c.name as cluster_name,
  c.arn as cluster_arn,
  c.version as cluster_version,
  f.fargate_profile_name as fargate_profile_name,
  f.fargate_profile_arn as fargate_profile_arn,
  f.created_at as created_at,
  f.pod_execution_role_arn as pod_execution_role_arn,
  f.status as fargate_profile_status
from
  aws_eks_fargate_profile as f,
  aws_eks_cluster as c
where
  c.version::float <= 1.19
  and f.cluster_name = c.name;
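The source docs do not provide a SQLite variant for this example, but one can be approximated by swapping PostgreSQL's ::float cast for SQLite's cast(... as real). This is an untested sketch: it assumes the version column holds simple major.minor strings such as '1.19', and it follows the heading's intent of matching clusters not running a version greater than 1.19.

```sql
select
  c.name as cluster_name,
  c.arn as cluster_arn,
  c.version as cluster_version,
  f.fargate_profile_name as fargate_profile_name,
  f.fargate_profile_arn as fargate_profile_arn,
  f.created_at as created_at,
  f.pod_execution_role_arn as pod_execution_role_arn,
  f.status as fargate_profile_status
from
  aws_eks_fargate_profile as f,
  aws_eks_cluster as c
where
  cast(c.version as real) <= 1.19
  and f.cluster_name = c.name;
```

Note that the numeric comparison is approximate in either dialect: a patch-level version string such as '1.19.1' does not cast cleanly to a number.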
Title: Steampipe Table: aws_eks_identity_provider_config - Query Amazon EKS Identity Provider Configurations using SQL
Description: Allows users to query Amazon EKS Identity Provider Configurations for detailed information about the identity provider configurations for Amazon EKS clusters.

Table: aws_eks_identity_provider_config - Query Amazon EKS Identity Provider Configurations using SQL

The Amazon EKS Identity Provider Configurations is a feature of Amazon Elastic Kubernetes Service (EKS). It allows you to integrate and manage third-party identity providers for authentication with your EKS clusters. This ensures secure access and identity management for your Kubernetes workloads.

Table Usage Guide

The aws_eks_identity_provider_config table in Steampipe provides you with information about the identity provider configurations for Amazon EKS clusters. This table allows you, as a DevOps engineer, to query configuration-specific details, including the type of identity provider, client ID, issuer URL, and associated metadata. You can utilize this table to gather insights on configurations, such as the type of identity provider, the client ID, and the issuer URL. The schema outlines the various attributes of the identity provider configuration, including the cluster name, creation time, tags, and status for you.

Examples

Basic info

Explore which AWS EKS identity provider configurations are in use and their current status. This can help you manage and monitor your AWS EKS resources more effectively.

select
  name,
  arn,
  cluster_name,
  tags,
  status
from
  aws_eks_identity_provider_config;

List OIDC type Identity provider config

Determine the areas in which OpenID Connect (OIDC) type identity provider configurations are used within your AWS Elastic Kubernetes Service (EKS) clusters. This is useful for understanding your security setup and ensuring that it aligns with your organization's policies.

select
  name,
  arn,
  cluster_name,
  type
from
  aws_eks_identity_provider_config
where 
  type = 'oidc';
Title: Steampipe Table: aws_eks_node_group - Query AWS EKS Node Group using SQL
Description: Allows users to query AWS EKS Node Group data, providing information about each node group within an AWS Elastic Kubernetes Service (EKS) cluster.

Table: aws_eks_node_group - Query AWS EKS Node Group using SQL

The AWS EKS Node Group is a resource within Amazon Elastic Kubernetes Service (EKS). It represents a group of nodes within a cluster that all share the same configuration, making it easier to manage and scale your applications. Node groups are associated with a specific Amazon EKS cluster and can be customized according to your workload requirements.

Table Usage Guide

The aws_eks_node_group table in Steampipe provides you with information about each node group within an AWS Elastic Kubernetes Service (EKS) cluster. This table allows you, as a DevOps engineer, system administrator, or other technical professional, to query node-group-specific details, including the node group ARN, creation timestamp, health status, and associated metadata. You can utilize this table to gather insights on node groups, such as the status of each node, the instance types used, and more. The schema outlines the various attributes of the EKS node group for you, including the node role, subnets, scaling configuration, and associated tags.

Examples

Basic info

Explore the status and creation details of node groups within your Amazon EKS clusters. This allows you to track the health and longevity of your Kubernetes resources, aiding in efficient resource management.

select
  nodegroup_name,
  arn,
  created_at,
  cluster_name,
  status
from
  aws_eks_node_group;

List node groups that are not active

Identify instances where certain node groups within your AWS EKS service are not active. This can help in managing resources effectively by pinpointing potential areas of concern or underutilization.

PostgreSQL:

select
  nodegroup_name,
  arn,
  created_at,
  cluster_name,
  status
from
  aws_eks_node_group
where
  status <> 'ACTIVE';

SQLite:

select
  nodegroup_name,
  arn,
  created_at,
  cluster_name,
  status
from
  aws_eks_node_group
where
  status != 'ACTIVE';

Get health status of the node groups

Assess the health status of various node groups within your AWS EKS clusters. This can help identify any potential issues or anomalies, ensuring optimal performance and stability of your Kubernetes workloads.

PostgreSQL:

select
  nodegroup_name,
  cluster_name,
  jsonb_pretty(health) as health
from
  aws_eks_node_group;

SQLite:

select
  nodegroup_name,
  cluster_name,
  health
from
  aws_eks_node_group;

Get launch template details of the node groups

Determine the configuration details of node groups within a cluster to understand the settings and specifications of each node group.

PostgreSQL:

select
  nodegroup_name,
  cluster_name,
  jsonb_pretty(launch_template) as launch_template
from
  aws_eks_node_group;

SQLite:

select
  nodegroup_name,
  cluster_name,
  launch_template
from
  aws_eks_node_group;
Title: Steampipe Table: aws_elastic_beanstalk_application - Query AWS Elastic Beanstalk Applications using SQL
Description: Allows users to query AWS Elastic Beanstalk Applications to obtain details about their configurations, versions, environment, and other metadata.

Table: aws_elastic_beanstalk_application - Query AWS Elastic Beanstalk Applications using SQL

The AWS Elastic Beanstalk Application is a component of AWS's platform-as-a-service (PaaS) offering, Elastic Beanstalk. It allows developers to deploy and manage applications in multiple languages without worrying about the infrastructure that runs those applications. The Elastic Beanstalk Application handles capacity provisioning, load balancing, and automatic scaling, among other tasks, enabling developers to focus on their application code.

Table Usage Guide

The aws_elastic_beanstalk_application table in Steampipe provides you with information about applications within AWS Elastic Beanstalk. This table enables you, as a DevOps engineer, to query application-specific details, including application ARN, description, date created, date updated, and associated metadata. You can utilize this table to gather insights on applications, such as application versions, configurations, associated environments, and more. The schema outlines for you the various attributes of the Elastic Beanstalk application, including the resource lifecycles, configurations, and associated tags.

Examples

Basic info

Explore the applications in your AWS Elastic Beanstalk environment to understand their creation and update timeline, as well as the different versions available. This can help in managing the applications better by keeping track of their versions and update history.

select
  name,
  arn,
  description,
  date_created,
  date_updated,
  versions
from
  aws_elastic_beanstalk_application;

Get resource life cycle configuration details for each application

Determine the life cycle configurations of your applications to understand the roles assigned and the rules set for version management. This can help in optimizing resource usage and maintaining application health.

PostgreSQL:

select
  name,
  resource_lifecycle_config ->> 'ServiceRole' as role,
  resource_lifecycle_config -> 'VersionLifecycleConfig' ->> 'MaxAgeRule' as max_age_rule,
  resource_lifecycle_config -> 'VersionLifecycleConfig' ->> 'MaxCountRule' as max_count_rule
from
  aws_elastic_beanstalk_application;

SQLite:

select
  name,
  json_extract(resource_lifecycle_config, '$.ServiceRole') as role,
  json_extract(resource_lifecycle_config, '$.VersionLifecycleConfig.MaxAgeRule') as max_age_rule,
  json_extract(resource_lifecycle_config, '$.VersionLifecycleConfig.MaxCountRule') as max_count_rule
from
  aws_elastic_beanstalk_application;
Title: Steampipe Table: aws_elastic_beanstalk_application_version - Query AWS Elastic Beanstalk Application Versions using SQL
Description: Allows users to query AWS Elastic Beanstalk Application Versions to obtain details about their configurations, environments, and other metadata.

Table: aws_elastic_beanstalk_application_version - Query AWS Elastic Beanstalk Application Versions using SQL

The AWS Elastic Beanstalk Application Version is a component of AWS's platform-as-a-service (PaaS) offering, Elastic Beanstalk. It allows developers to deploy and manage applications in multiple languages without worrying about the infrastructure that runs those applications. The Elastic Beanstalk Application Version handles capacity provisioning, load balancing, and automatic scaling, among other tasks, enabling developers to focus on their application code.

Table Usage Guide

The aws_elastic_beanstalk_application_version table in Steampipe provides you with information about application versions within AWS Elastic Beanstalk. This table enables you, as a DevOps engineer, to query application version-specific details, including application version ARN, description, date created, date updated, and associated metadata. You can utilize this table to gather insights on application versions, such as application version configurations, associated environments, and more. The schema outlines for you the various attributes of the Elastic Beanstalk application version, including the resource lifecycles, configurations, and associated tags.

Examples

Basic info

Explore the application versions in your AWS Elastic Beanstalk environment to understand their creation and update timeline, as well as the different configurations available. This can help in managing the application versions better by keeping track of their configurations and update history.

select
  application_name,
  application_version_arn,
  version_label,
  description,
  date_created,
  date_updated,
  source_bundle
from
  aws_elastic_beanstalk_application_version;

List the recently updated application versions

Identify the application versions that have been recently updated in your AWS Elastic Beanstalk environment. This can help in tracking the recent changes made to the application versions and understanding the impact of these changes on the environment.

select
  application_name,
  application_version_arn,
  version_label,
  date_updated
from
  aws_elastic_beanstalk_application_version
order by
  date_updated desc;

List the application versions which are 'Processed'

Identify the application versions that are in the 'Processed' state in your AWS Elastic Beanstalk environment. This can help in understanding the status of the application versions and their readiness for deployment.

select
  application_name,
  application_version_arn,
  version_label,
  status
from
  aws_elastic_beanstalk_application_version
where
  status = 'Processed';

List the application versions of a specific application

Identify the application versions of a specific application in your AWS Elastic Beanstalk environment. This can help in understanding the different versions available for a specific application and their configurations.

select
  application_name,
  application_version_arn,
  version_label,
  description,
  date_created,
  date_updated,
  source_bundle
from
  aws_elastic_beanstalk_application_version
where
  application_name = 'my-application';

List the application versions with specific tags

Identify the application versions with specific tags in your AWS Elastic Beanstalk environment. This can help in understanding the tags associated with the application versions and their metadata.

PostgreSQL:

select
  application_name,
  application_version_arn,
  version_label,
  tags
from
  aws_elastic_beanstalk_application_version
where
  tags ->> 'Environment' = 'Production';

SQLite:

select
  application_name,
  application_version_arn,
  version_label,
  tags
from
  aws_elastic_beanstalk_application_version
where
  json_extract(tags, '$.Environment') = 'Production';

List the application versions where the source repository is stored in CodeCommit

Identify the application versions where the source repository is stored in AWS CodeCommit in your AWS Elastic Beanstalk environment. This can help in understanding the source repository of the application versions and their configurations.

PostgreSQL:

select
  application_name,
  application_version_arn,
  version_label
from
  aws_elastic_beanstalk_application_version
where
  source_build_information ->> 'SourceRepository' = 'CodeCommit';

SQLite:

select
  application_name,
  application_version_arn,
  version_label
from
  aws_elastic_beanstalk_application_version
where
  json_extract(source_build_information, '$.SourceRepository') = 'CodeCommit';
Title: Steampipe Table: aws_elastic_beanstalk_environment - Query AWS Elastic Beanstalk Environments using SQL
Description: Allows users to query AWS Elastic Beanstalk Environments to gain insights into their configuration, status, health, related applications, and other metadata.

Table: aws_elastic_beanstalk_environment - Query AWS Elastic Beanstalk Environments using SQL

The AWS Elastic Beanstalk Environment is a part of the AWS Elastic Beanstalk service that allows developers to deploy and manage applications in the AWS cloud without worrying about the infrastructure that runs those applications. This service automatically handles the capacity provisioning, load balancing, scaling, and application health monitoring. It supports applications developed in Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker.

Table Usage Guide

The aws_elastic_beanstalk_environment table in Steampipe provides you with information about environments within AWS Elastic Beanstalk. This table allows you as a DevOps engineer to query environment-specific details, including configuration settings, environment health, related applications, and associated metadata. You can utilize this table to gather insights on environments, such as environments with specific configurations, health status, associated applications, and more. The schema outlines the various attributes of the Elastic Beanstalk environment for you, including the environment name, ID, application name, status, health, and associated tags.

Examples

Basic info

Explore the configuration of your AWS Elastic Beanstalk environments to understand their applications and tiers. This is useful for reviewing the setup and organization of your cloud applications.

select
  environment_id,
  environment_name,
  application_name,
  arn,
  tier
from
  aws_elastic_beanstalk_environment;

List environments which have configuration updates and application version deployments in progress

Identify instances where configuration updates and application version deployments are currently in progress. This can be useful in managing and tracking ongoing operations within your environment.

PostgreSQL:

select
  environment_name,
  abortable_operation_in_progress
from
  aws_elastic_beanstalk_environment
where
  abortable_operation_in_progress = 'true';

SQLite (booleans are stored as 0/1):

select
  environment_name,
  abortable_operation_in_progress
from
  aws_elastic_beanstalk_environment
where
  abortable_operation_in_progress = 1;

List unhealthy environments

Determine the areas in which AWS Elastic Beanstalk environments are unhealthy. This query is useful for identifying and addressing problematic environments to ensure optimal application performance.

select
  environment_name,
  application_name,
  environment_id,
  health
from
  aws_elastic_beanstalk_environment
where
  health = 'Red';

List environments with health monitoring disabled

Identify instances where health monitoring has been suspended in certain environments to understand potential vulnerabilities and ensure optimal performance.

select
  environment_name,
  health_status
from
  aws_elastic_beanstalk_environment
where
  health_status = 'Suspended';

List managed actions for each environment

Identify the managed actions associated with each environment in the AWS Elastic Beanstalk service. This can help in monitoring the status and type of actions, providing insights for better management and optimization of your environments.

PostgreSQL:

select
  environment_name,
  a ->> 'ActionDescription' as action_description,
  a ->> 'ActionId' as action_id,
  a ->> 'ActionType' as action_type,
  a ->> 'Status' as action_status,
  a ->> 'WindowStartTime' as action_window_start_time
from
  aws_elastic_beanstalk_environment,
  jsonb_array_elements(managed_actions) as a;

SQLite:

select
  environment_name,
  json_extract(a.value, '$.ActionDescription') as action_description,
  json_extract(a.value, '$.ActionId') as action_id,
  json_extract(a.value, '$.ActionType') as action_type,
  json_extract(a.value, '$.Status') as action_status,
  json_extract(a.value, '$.WindowStartTime') as action_window_start_time
from
  aws_elastic_beanstalk_environment,
  json_each(managed_actions) as a;

List the configuration settings for each environment

Determine the areas in which configuration settings for various environments are tracked and updated. This can be used to keep track of deployment status, platform details, and other critical factors in your AWS Elastic Beanstalk environments.

PostgreSQL:

select
  environment_name,
  application_name,
  c ->> 'DateCreated' as date_created,
  c ->> 'DateUpdated' as date_updated,
  c ->> 'DeploymentStatus' as deployment_status,
  c ->> 'Description' as description,
  c -> 'OptionSettings' ->> 'Namespace' as option_settings_namespace,
  c -> 'OptionSettings' ->> 'OptionName' as option_name,
  c -> 'OptionSettings' ->> 'ResourceName' as option_resource_name,
  c -> 'OptionSettings' ->> 'Value' as option_value,
  c ->> 'PlatformArn' as platform_arn,
  c ->> 'SolutionStackName' as solution_stack_name,
  c ->> 'TemplateName' as template_name
from
  aws_elastic_beanstalk_environment,
  jsonb_array_elements(configuration_settings) as c;

SQLite:

select
  environment_name,
  application_name,
  json_extract(c.value, '$.DateCreated') as date_created,
  json_extract(c.value, '$.DateUpdated') as date_updated,
  json_extract(c.value, '$.DeploymentStatus') as deployment_status,
  json_extract(c.value, '$.Description') as description,
  json_extract(c.value, '$.OptionSettings.Namespace') as option_settings_namespace,
  json_extract(c.value, '$.OptionSettings.OptionName') as option_name,
  json_extract(c.value, '$.OptionSettings.ResourceName') as option_resource_name,
  json_extract(c.value, '$.OptionSettings.Value') as option_value,
  json_extract(c.value, '$.PlatformArn') as platform_arn,
  json_extract(c.value, '$.SolutionStackName') as solution_stack_name,
  json_extract(c.value, '$.TemplateName') as template_name
from
  aws_elastic_beanstalk_environment,
  json_each(configuration_settings) as c;
Title: Steampipe Table: aws_elasticache_cluster - Query Amazon ElastiCache Cluster using SQL
Description: Allows users to query Amazon ElastiCache Cluster data, providing information about each ElastiCache Cluster within the AWS account.

Table: aws_elasticache_cluster - Query Amazon ElastiCache Cluster using SQL

The Amazon ElastiCache Cluster is a part of AWS's ElastiCache service that offers fully managed in-memory data store and cache services. This resource is designed to improve the performance of web applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying solely on slower disk-based databases. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.

Table Usage Guide

The aws_elasticache_cluster table in Steampipe provides you with information about each ElastiCache Cluster within your AWS account. This table enables you, as a DevOps engineer, database administrator, or other IT professional, to query cluster-specific details, including configuration, status, and associated metadata. You can utilize this table to gather insights on clusters, such as their availability zones, cache node types, engine versions, and more. The schema outlines the various attributes of the ElastiCache Cluster for you, including the cluster ID, creation date, current status, and associated tags.

Examples
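
Basic info

Before drilling into specific checks, a quick overview query is useful. A minimal sketch that reuses only columns already referenced by this table's other examples, so it runs unchanged on both the PostgreSQL and SQLite backends:

```sql
select
  cache_cluster_id,
  cache_node_type,
  cache_cluster_status,
  preferred_availability_zone,
  snapshot_retention_limit
from
  aws_elasticache_cluster;
```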

List clusters that are not encrypted at rest

Determine the areas in which data clusters are lacking proper encryption at rest. This is essential for identifying potential security vulnerabilities and ensuring data protection compliance.

-- PostgreSQL
select
  cache_cluster_id,
  cache_node_type,
  at_rest_encryption_enabled
from
  aws_elasticache_cluster
where
  not at_rest_encryption_enabled;

-- SQLite
select
  cache_cluster_id,
  cache_node_type,
  at_rest_encryption_enabled
from
  aws_elasticache_cluster
where
  at_rest_encryption_enabled = 0;

List clusters whose availability zone count is less than 2

Determine the areas in which your AWS ElastiCache clusters are potentially vulnerable due to having less than two availability zones. This could be useful for improving disaster recovery strategies and ensuring high availability.

select
  cache_cluster_id,
  preferred_availability_zone
from
  aws_elasticache_cluster
where
  preferred_availability_zone <> 'Multiple';

List clusters that do not enforce encryption in transit

Determine the areas in your system where encryption in transit is not enforced. This is useful for identifying potential security risks and ensuring that all data is properly protected during transmission.

select
  cache_cluster_id,
  cache_node_type,
  transit_encryption_enabled
from
  aws_elasticache_cluster
where
  not transit_encryption_enabled;
select
  cache_cluster_id,
  cache_node_type,
  transit_encryption_enabled
from
  aws_elasticache_cluster
where
  transit_encryption_enabled = 0;

List clusters provisioned with node types other than the desired ones (for example, cache.m5.large and cache.m4.4xlarge)

Identify instances where clusters have been provisioned with undesired node types, enabling you to streamline your resources and align with your preferred configurations. This is particularly useful for maintaining consistency and optimizing performance across your infrastructure.

select
  cache_node_type,
  count(*) as count
from
  aws_elasticache_cluster
where
  cache_node_type not in ('cache.m5.large', 'cache.m4.4xlarge')
group by
  cache_node_type;

List clusters with inactive notification configuration topics

Determine the areas in which clusters have inactive notification configurations to assess the elements within your system that may not be receiving important updates or alerts.

-- PostgreSQL
select
  cache_cluster_id,
  cache_cluster_status,
  notification_configuration ->> 'TopicArn' as topic_arn,
  notification_configuration ->> 'TopicStatus' as topic_status
from
  aws_elasticache_cluster
where
  notification_configuration ->> 'TopicStatus' = 'inactive';

-- SQLite
select
  cache_cluster_id,
  cache_cluster_status,
  json_extract(notification_configuration, '$.TopicArn') as topic_arn,
  json_extract(notification_configuration, '$.TopicStatus') as topic_status
from
  aws_elasticache_cluster
where
  json_extract(notification_configuration, '$.TopicStatus') = 'inactive';

Get security group details for each cluster

Determine the security status of each cluster by examining the associated security group details. This can help in evaluating the security posture of your clusters and identifying any potential vulnerabilities.

-- PostgreSQL
select
  cache_cluster_id,
  sg ->> 'SecurityGroupId' as security_group_id,
  sg ->> 'Status' as status
from
  aws_elasticache_cluster,
  jsonb_array_elements(security_groups) as sg;

-- SQLite
select
  cache_cluster_id,
  json_extract(sg.value, '$.SecurityGroupId') as security_group_id,
  json_extract(sg.value, '$.Status') as status
from
  aws_elasticache_cluster,
  json_each(security_groups) as sg;

List clusters with automatic backup disabled

Determine the areas in which automatic backups are disabled for your clusters. This is useful for ensuring data safety and minimizing the risk of data loss.

select
  cache_cluster_id,
  cache_node_type,
  cache_cluster_status,
  snapshot_retention_limit
from
  aws_elasticache_cluster
where
  snapshot_retention_limit is null;
title description
Steampipe Table: aws_elasticache_parameter_group - Query AWS Elasticache Parameter Groups using SQL
Allows users to query AWS Elasticache Parameter Groups, providing detailed information about each group's configurations, parameters, and associated metadata.

Table: aws_elasticache_parameter_group - Query AWS Elasticache Parameter Groups using SQL

The AWS ElastiCache Parameter Group is a feature of Amazon ElastiCache that allows you to manage the runtime settings for your ElastiCache instances. These groups enable you to apply identical configurations to multiple instances, enhancing the ease of setup and consistency across your cache environment. This resource is useful in both Memcached and Redis cache engines, providing control over cache security, memory usage, and other operational parameters.

Table Usage Guide

The aws_elasticache_parameter_group table in Steampipe provides you with information about Parameter Groups within AWS Elasticache. This table allows you, as a DevOps engineer, database administrator, or other technical professional, to query group-specific details, including associated parameters, parameter values, and descriptions. You can utilize this table to gather insights on parameter groups, such as their configurations, default system parameters, and user-defined parameters. The schema outlines the various attributes of the Parameter Group for you, including the group name, family, description, and associated parameters.

Examples

Basic info

Explore the characteristics of your AWS ElastiCache parameter groups to understand their configurations and global status. This can be useful in managing and optimizing your cache environments within AWS.

select
  cache_parameter_group_name,
  description,
  cache_parameter_group_family,
  is_global
from
  aws_elasticache_parameter_group;

List parameter groups that are not compatible with redis 5.0 and memcached 1.5

Determine the areas in which parameter groups are incompatible with specific versions of Redis and Memcached. This can be useful to identify potential upgrade paths or to troubleshoot issues related to mismatched software versions.

select
  cache_parameter_group_family,
  count(*) as count
from
  aws_elasticache_parameter_group
where
  cache_parameter_group_family not in ('redis5.0', 'memcached1.5')
group by
  cache_parameter_group_family;
title description
Steampipe Table: aws_elasticache_redis_metric_cache_hits_hourly - Query Amazon ElastiCache Redis Cache Hits using SQL
Allows users to query Amazon ElastiCache Redis Cache Hits on an hourly basis.

Table: aws_elasticache_redis_metric_cache_hits_hourly - Query Amazon ElastiCache Redis Cache Hits using SQL

The Amazon ElastiCache Redis is a web service that makes it easy to set up, manage, and scale a distributed in-memory data store or cache environment in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with managing a distributed cache environment. The 'Cache Hits' metric specifically provides the number of successful read-only key lookups in the main dictionary on an hourly basis.

Table Usage Guide

The aws_elasticache_redis_metric_cache_hits_hourly table in Steampipe provides you with information about the cache hits metrics of Amazon ElastiCache Redis instances on an hourly basis. This table allows you as a system administrator or a DevOps engineer to monitor and analyze the performance of Redis cache nodes by querying the cache hits metrics. You can utilize this table to gather insights on cache hits, such as the number of successful lookup of keys in the cache, and to understand the efficiency of your cache configurations. The schema outlines the various attributes of the cache hits metrics for you, including the timestamp, cache hits, dimensions, and more.

The aws_elasticache_redis_metric_cache_hits_hourly table provides you with metric statistics at 1 hour intervals for the most recent 60 days.

Examples

Basic info

Determine the efficiency of your AWS ElastiCache Redis instances by analyzing cache hit metrics over time. This can help optimize performance and resource utilization.

select
  cache_cluster_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_elasticache_redis_metric_cache_hits_hourly
order by
  cache_cluster_id,
  timestamp;
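
Cache hits in the last 7 days

Because this table spans the most recent 60 days, it often helps to narrow the query to a recent window. A minimal sketch in PostgreSQL syntax; the 7-day window is an arbitrary choice, and the columns match those used in the other examples:

```sql
select
  cache_cluster_id,
  timestamp,
  sum as cachehits_sum
from
  aws_elasticache_redis_metric_cache_hits_hourly
where
  timestamp > now() - interval '7 days'
order by
  cache_cluster_id,
  timestamp;
```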

CacheHit sum below 10

The query is used to monitor the performance of your AWS ElastiCache Redis clusters by identifying instances where the sum of cache hits falls below 10 in an hour. This can help you pinpoint potential issues and optimize your cache usage for improved application performance.

-- PostgreSQL
select
  cache_cluster_id,
  timestamp,
  round(sum::numeric,2) as sum_cachehits,
  round(average::numeric,2) as average_cachehits,
  sample_count
from
  aws_elasticache_redis_metric_cache_hits_hourly
where
  sum < 10
order by
  cache_cluster_id,
  timestamp;

-- SQLite
select
  cache_cluster_id,
  timestamp,
  round(sum,2) as sum_cachehits,
  round(average,2) as average_cachehits,
  sample_count
from
  aws_elasticache_redis_metric_cache_hits_hourly
where
  sum < 10
order by
  cache_cluster_id,
  timestamp;

CacheHit hourly average < 100

Explore the performance of your AWS ElastiCache Redis clusters by identifying instances where the hourly average of cache hits is less than 100. This can help pinpoint potential areas of concern and optimize the usage of your cache clusters.

-- PostgreSQL
select
  cache_cluster_id,
  timestamp,
  round(minimum::numeric,2) as min_cachehits,
  round(maximum::numeric,2) as max_cachehits,
  round(average::numeric,2) as avg_cachehits,
  sample_count
from
  aws_elasticache_redis_metric_cache_hits_hourly
where
  average < 100
order by
  cache_cluster_id,
  timestamp;

-- SQLite
select
  cache_cluster_id,
  timestamp,
  round(minimum,2) as min_cachehits,
  round(maximum,2) as max_cachehits,
  round(average,2) as avg_cachehits,
  sample_count
from
  aws_elasticache_redis_metric_cache_hits_hourly
where
  average < 100
order by
  cache_cluster_id,
  timestamp;
title description
Steampipe Table: aws_elasticache_redis_metric_curr_connections_hourly - Query AWS ElastiCache Redis using SQL
Allows users to query ElastiCache Redis current connections metrics on an hourly basis.

Table: aws_elasticache_redis_metric_curr_connections_hourly - Query AWS ElastiCache Redis using SQL

The AWS ElastiCache Redis service provides a fully managed in-memory data store, compatible with Redis or Memcached. It improves the performance of web applications by retrieving data from fast, managed, in-memory data stores, instead of relying on slower disk-based databases. ElastiCache Redis supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes with radius queries and streams.

Table Usage Guide

The aws_elasticache_redis_metric_curr_connections_hourly table in Steampipe provides you with information about the hourly current connections metrics of ElastiCache Redis within AWS. This table allows you, as a DevOps engineer, database administrator, or other technical professional, to query the current number of client connections, excluding connections from read replicas, to a Redis instance. You can utilize this table to monitor usage patterns, detect possible connection leaks, and optimize resource allocation based on connection demands. The schema outlines the various attributes of the ElastiCache Redis current connections metrics for you, including the timestamp, average, maximum, minimum, and sample count.

The aws_elasticache_redis_metric_curr_connections_hourly table provides you with metric statistics at 1 hour intervals for the most recent 60 days.

Examples

Basic info

Explore which AWS ElastiCache Redis clusters have the most connections over time. This information can help you understand the load on your clusters and identify any unusual spikes in connections that could indicate a problem.

select
  cache_cluster_id,
  timestamp,
  minimum,
  maximum,
  average,
  sum,
  sample_count
from
  aws_elasticache_redis_metric_curr_connections_hourly
order by
  cache_cluster_id,
  timestamp;

currconnections Over 100 average

Explore the performance of your AWS ElastiCache Redis clusters by identifying instances where the average number of connections exceeds 100 in an hour. This can help in understanding the load on your clusters and take necessary actions if they are consistently over-utilized.

-- PostgreSQL
select
  cache_cluster_id,
  timestamp,
  round(minimum::numeric,2) as min_currconnections,
  round(maximum::numeric,2) as max_currconnections,
  round(average::numeric,2) as avg_currconnections,
  sample_count
from
  aws_elasticache_redis_metric_curr_connections_hourly
where
  average > 100
order by
  cache_cluster_id,
  timestamp;

-- SQLite
select
  cache_cluster_id,
  timestamp,
  round(minimum,2) as min_currconnections,
  round(maximum,2) as max_currconnections,
  round(average,2) as avg_currconnections,
  sample_count
from
  aws_elasticache_redis_metric_curr_connections_hourly
where
  average > 100
order by
  cache_cluster_id,
  timestamp;
title description
Steampipe Table: aws_elasticache_redis_metric_engine_cpu_utilization_daily - Query AWS ElastiCache Redis Metrics using SQL
Allows users to query ElastiCache Redis Metrics and provides daily statistics for Engine CPU Utilization.

Table: aws_elasticache_redis_metric_engine_cpu_utilization_daily - Query AWS ElastiCache Redis Metrics using SQL

The AWS ElastiCache Redis Metrics service is a tool that allows you to collect, track, and analyze performance metrics for your running ElastiCache instances. It provides valuable information about CPU utilization, helping you understand how your applications are using your cache and where bottlenecks are occurring. This data can help you make informed decisions about scaling and optimizing your ElastiCache instances for better application performance.

Table Usage Guide

The aws_elasticache_redis_metric_engine_cpu_utilization_daily table in Steampipe provides you with daily statistical data about the CPU utilization of an Amazon ElastiCache Redis engine. This table allows you, as a DevOps engineer or data analyst, to query and analyze the CPU usage patterns of your ElastiCache Redis instances. This enables you to identify potential performance bottlenecks and optimize resource allocation. The schema outlines the various attributes of the CPU utilization metrics for you, including the timestamp, minimum, maximum, and average CPU usage, as well as the standard deviation.

The aws_elasticache_redis_metric_engine_cpu_utilization_daily table provides you with metric statistics at 24-hour intervals for the last year.

Examples

Basic info

Analyze the daily CPU utilization of AWS ElastiCache Redis clusters to understand their performance trends and capacity planning. This allows you to identify instances where resource usage may be high and adjust accordingly to ensure optimal functioning.

select
  cache_cluster_id,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_elasticache_redis_metric_engine_cpu_utilization_daily
order by
  cache_cluster_id,
  timestamp;

CPU Over 80% average

Determine the areas in which your AWS ElastiCache Redis instances are utilizing more than 80% of the CPU on average. This allows you to identify potential performance issues and optimize resource allocation.

-- PostgreSQL
select
  cache_cluster_id,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_elasticache_redis_metric_engine_cpu_utilization_daily
where
  average > 80
order by
  cache_cluster_id,
  timestamp;

-- SQLite
select
  cache_cluster_id,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_elasticache_redis_metric_engine_cpu_utilization_daily
where
  average > 80
order by
  cache_cluster_id,
  timestamp;

CPU daily average < 2%

Identify instances where the daily average CPU utilization is less than 2% in your AWS ElastiCache Redis clusters. This is useful in understanding underutilized resources, which can help optimize costs and resource allocation.

-- PostgreSQL
select
  cache_cluster_id,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_elasticache_redis_metric_engine_cpu_utilization_daily
where
  average < 2
order by
  cache_cluster_id,
  timestamp;

-- SQLite
select
  cache_cluster_id,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_elasticache_redis_metric_engine_cpu_utilization_daily
where
  average < 2
order by
  cache_cluster_id,
  timestamp;
title description
Steampipe Table: aws_elasticache_redis_metric_engine_cpu_utilization_hourly - Query AWS ElastiCache Redis using SQL
Allows users to query hourly CPU utilization metrics for AWS ElastiCache Redis.

Table: aws_elasticache_redis_metric_engine_cpu_utilization_hourly - Query AWS ElastiCache Redis using SQL

The AWS ElastiCache Redis is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud. It provides a high-performance, scalable, and cost-effective caching solution, while removing the complexity associated with managing a distributed cache environment. This service is primarily used to improve the performance of web applications by retrieving information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases.

Table Usage Guide

The aws_elasticache_redis_metric_engine_cpu_utilization_hourly table in Steampipe gives you information about the hourly CPU utilization metrics for AWS ElastiCache Redis. This table enables you, as a DevOps engineer, database administrator, or other technical professional, to query time-series data related to CPU usage. As a result, you can monitor performance, identify potential bottlenecks, and optimize resource allocation. The schema outlines various attributes of the CPU utilization metrics for you, including the timestamp, average, maximum, and minimum CPU utilization, among others.

The aws_elasticache_redis_metric_engine_cpu_utilization_hourly table provides you with metric statistics at 1 hour intervals for the most recent 60 days.

Examples

Basic info

Explore the performance of your AWS ElastiCache Redis instances by analyzing CPU utilization over time. This can help optimize resource allocation and identify instances where performance tuning may be required.

select
  cache_cluster_id,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_elasticache_redis_metric_engine_cpu_utilization_hourly
order by
  cache_cluster_id,
  timestamp;

CPU Over 80% average

Discover instances where your AWS ElastiCache Redis clusters are experiencing high CPU usage, specifically over 80% on average. This can help identify potential performance issues and allow for proactive troubleshooting.

-- PostgreSQL
select
  cache_cluster_id,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_elasticache_redis_metric_engine_cpu_utilization_hourly
where
  average > 80
order by
  cache_cluster_id,
  timestamp;

-- SQLite
select
  cache_cluster_id,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_elasticache_redis_metric_engine_cpu_utilization_hourly
where
  average > 80
order by
  cache_cluster_id,
  timestamp;

CPU hourly average < 2%

Analyze the performance of your AWS ElastiCache Redis clusters by identifying instances where the average CPU usage is less than 2% on an hourly basis. This can help pinpoint potential inefficiencies or underutilized resources, optimizing your cloud infrastructure management and cost efficiency.

-- PostgreSQL
select
  cache_cluster_id,
  timestamp,
  round(minimum::numeric,2) as min_cpu,
  round(maximum::numeric,2) as max_cpu,
  round(average::numeric,2) as avg_cpu,
  sample_count
from
  aws_elasticache_redis_metric_engine_cpu_utilization_hourly
where
  average < 2
order by
  cache_cluster_id,
  timestamp;

-- SQLite
select
  cache_cluster_id,
  timestamp,
  round(minimum,2) as min_cpu,
  round(maximum,2) as max_cpu,
  round(average,2) as avg_cpu,
  sample_count
from
  aws_elasticache_redis_metric_engine_cpu_utilization_hourly
where
  average < 2
order by
  cache_cluster_id,
  timestamp;
title description
Steampipe Table: aws_elasticache_redis_metric_get_type_cmds_hourly - Query AWS ElastiCache Redis Metrics using SQL
Allows users to query ElastiCache Redis Metrics on an hourly basis. This includes information on GET type commands executed in the selected ElastiCache Redis cluster during the last hour.

Table: aws_elasticache_redis_metric_get_type_cmds_hourly - Query AWS ElastiCache Redis Metrics using SQL

The AWS ElastiCache Redis Metrics service provides valuable insights into the performance of your Redis data stores. It allows you to monitor key performance metrics, including the number of 'get type' commands executed per hour. These metrics can help you optimize the performance and efficiency of your Redis data stores.

Table Usage Guide

The aws_elasticache_redis_metric_get_type_cmds_hourly table in Steampipe provides you with information about the GET type commands executed in your selected AWS ElastiCache Redis cluster during the last hour. This table allows you, whether you're a DevOps engineer, database administrator, or other IT professional, to query and analyze the hourly GET type command metrics. This gives you insights into the performance and usage patterns of your ElastiCache Redis clusters. The schema outlines the various attributes of the ElastiCache Redis Metrics for you, including the average, maximum, minimum, sample count, and sum of GET type commands.

The aws_elasticache_redis_metric_get_type_cmds_hourly table provides you with metric statistics at 1 hour intervals for the most recent 60 days.

Examples

Basic info

Analyze the performance of your AWS ElastiCache Redis clusters over time to ensure optimal resource utilization and response times. This practical application allows you to monitor and manage your clusters effectively, leading to improved performance and cost efficiency.

select
  cache_cluster_id,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count
from
  aws_elasticache_redis_metric_get_type_cmds_hourly
order by
  cache_cluster_id,
  timestamp;

gettypecmds sum over 100

Explore the performance of your AWS ElastiCache Redis clusters by identifying instances where the sum of 'get type' commands exceeds 100 in an hour. This can help in understanding usage patterns and planning for capacity upgrades or optimizations.

-- PostgreSQL
select
  cache_cluster_id,
  timestamp,
  round(minimum::numeric,2) as min_gettypecmds,
  round(maximum::numeric,2) as max_gettypecmds,
  round(average::numeric,2) as avg_gettypecmds,
  round(sum::numeric,2) as sum_gettypecmds
from
  aws_elasticache_redis_metric_get_type_cmds_hourly
where
  sum > 100
order by
  cache_cluster_id,
  timestamp;

-- SQLite
select
  cache_cluster_id,
  timestamp,
  round(minimum,2) as min_gettypecmds,
  round(maximum,2) as max_gettypecmds,
  round(average,2) as avg_gettypecmds,
  round(sum,2) as sum_gettypecmds
from
  aws_elasticache_redis_metric_get_type_cmds_hourly
where
  sum > 100
order by
  cache_cluster_id,
  timestamp;
title description
Steampipe Table: aws_elasticache_redis_metric_list_based_cmds_hourly - Query AWS ElastiCache Redis Metrics using SQL
Allows users to query ElastiCache Redis Metrics on an hourly basis, providing data on list-based commands executed in the ElastiCache Redis environment.

Table: aws_elasticache_redis_metric_list_based_cmds_hourly - Query AWS ElastiCache Redis Metrics using SQL

The AWS ElastiCache Redis Metrics service allows you to monitor, isolate, and diagnose performance issues in your ElastiCache Redis environments using SQL. It provides important insights into the operational health of your ElastiCache Redis instances by collecting and analyzing key database performance metrics. This service enables efficient troubleshooting and performance optimization of your ElastiCache Redis environments.

Table Usage Guide

The aws_elasticache_redis_metric_list_based_cmds_hourly table in Steampipe provides you with information about list-based command metrics within AWS ElastiCache Redis. This table allows you, as a DevOps engineer, to query command-specific details on an hourly basis, including the number of commands processed, the latency of commands, and associated metadata. You can utilize this table to gather insights on command performance, such as identifying high latency commands, tracking the frequency of command usage, and more. The schema outlines the various attributes of the ElastiCache Redis command metrics, including the cache cluster id, the metric name, and the timestamp.

The aws_elasticache_redis_metric_list_based_cmds_hourly table provides you with metric statistics at 1 hour intervals for the most recent 60 days.

Examples

Basic info

Determine the performance trends of your ElastiCache Redis clusters by analyzing hourly metrics. This can help in identifying patterns, optimizing resource usage and planning for capacity upgrades.

select
  cache_cluster_id,
  timestamp,
  minimum,
  maximum,
  average,
  sample_count,
  sum
from
  aws_elasticache_redis_metric_list_based_cmds_hourly
order by
  cache_cluster_id,
  timestamp;

listbasedcmds sum over 100

This query is useful for monitoring your AWS ElastiCache Redis clusters by identifying instances where the sum of list-based commands executed per hour exceeds 100. This can help in optimizing your cache usage by pinpointing areas of high command activity.

-- PostgreSQL
select
  cache_cluster_id,
  timestamp,
  round(minimum::numeric,2) as min_listbasedcmds,
  round(maximum::numeric,2) as max_listbasedcmds,
  round(average::numeric,2) as avg_listbasedcmds,
  round(sum::numeric,2) as sum_listbasedcmds
from
  aws_elasticache_redis_metric_list_based_cmds_hourly
where
  sum > 100
order by
  cache_cluster_id,
  timestamp;

-- SQLite
select
  cache_cluster_id,
  timestamp,
  round(minimum,2) as min_listbasedcmds,
  round(maximum,2) as max_listbasedcmds,
  round(average,2) as avg_listbasedcmds,
  round(sum,2) as sum_listbasedcmds
from
  aws_elasticache_redis_metric_list_based_cmds_hourly
where
  sum > 100
order by
  cache_cluster_id,
  timestamp;
title description
Steampipe Table: aws_elasticache_redis_metric_new_connections_hourly - Query AWS ElastiCache Redis Metrics using SQL
Allows users to query AWS ElastiCache Redis Metrics to get hourly data on new connections.

Table: aws_elasticache_redis_metric_new_connections_hourly - Query AWS ElastiCache Redis Metrics using SQL

The AWS ElastiCache Redis Metrics provides a robust monitoring solution for your applications. It allows you to collect, view, and analyze metrics for your ElastiCache Redis instances through SQL queries. The 'new_connections_hourly' metric specifically measures the number of new connections made to the Redis server per hour, aiding in capacity planning and performance tuning.

Table Usage Guide

The aws_elasticache_redis_metric_new_connections_hourly table in Steampipe provides you with information about AWS ElastiCache Redis Metrics. This table allows you, as a DevOps engineer or system administrator, to query hourly data about new connections to your AWS ElastiCache Redis instances. You can utilize this table to monitor connection trends, analyze system performance, and identify potential issues. The schema outlines the various attributes of the metrics, including the cache node ID, timestamp, maximum number of connections, and more.

The aws_elasticache_redis_metric_new_connections_hourly table provides you with metric statistics at 1 hour intervals for the most recent 60 days.

Examples

Basic info

Determine the areas in which AWS ElastiCache Redis clusters have experienced new connections over time. This can help in understanding usage patterns and identifying potential periods of high demand or unusual activity.

select
  cache_cluster_id,
  timestamp,
  minimum,
  maximum,
  average
from
  aws_elasticache_redis_metric_new_connections_hourly
order by
  cache_cluster_id,
  timestamp;

newconnections sum over 10

This query is useful for identifying instances where the total number of new connections to your AWS ElastiCache Redis clusters exceeds 10 within an hour. It allows you to monitor and manage your connection usage, helping to ensure optimal performance and avoid potential overloads.

-- PostgreSQL
select
  cache_cluster_id,
  timestamp,
  round(minimum::numeric,2) as min_newconnections,
  round(maximum::numeric,2) as max_newconnections,
  round(average::numeric,2) as avg_newconnections,
  round(sum::numeric,2) as sum_newconnections
from
  aws_elasticache_redis_metric_new_connections_hourly
where
  sum > 10
order by
  cache_cluster_id,
  timestamp;

-- SQLite
select
  cache_cluster_id,
  timestamp,
  round(minimum,2) as min_newconnections,
  round(maximum,2) as max_newconnections,
  round(average,2) as avg_newconnections,
  round(sum,2) as sum_newconnections
from
  aws_elasticache_redis_metric_new_connections_hourly
where
  sum > 10
order by
  cache_cluster_id,
  timestamp;
title: Steampipe Table: aws_elasticache_replication_group - Query AWS ElastiCache Replication Groups using SQL
description: Allows users to query AWS ElastiCache Replication Groups to retrieve information related to their configuration, status, and associated resources.

Table: aws_elasticache_replication_group - Query AWS ElastiCache Replication Groups using SQL

The AWS ElastiCache Replication Group is a feature of AWS ElastiCache that allows you to create a group of one or more cache clusters that are managed as a single entity. This enables the automatic partitioning of your data across multiple shards, providing enhanced performance, reliability, and scalability. Replication groups also support automatic failover, providing a high level of data availability.

Table Usage Guide

The aws_elasticache_replication_group table in Steampipe provides you with information about replication groups within AWS ElastiCache. This table allows you, as a DevOps engineer, to query group-specific details, including configuration, status, and associated resources. You can utilize this table to gather insights on replication groups, such as their current status, associated cache clusters, node types, and more. The schema outlines the various attributes of the replication group for you, including the replication group ID, status, description, and associated tags.

Examples

Basic info

Determine the areas in which automatic failover is enabled in AWS ElastiCache, as well as whether authentication tokens are being used, to enhance security and ensure data redundancy. This query helps in identifying potential vulnerabilities and improving disaster recovery strategies.

select
  replication_group_id,
  description,
  cache_node_type,
  cluster_enabled,
  auth_token_enabled,
  automatic_failover
from
  aws_elasticache_replication_group;

List replication groups that are not encrypted at rest

Identify instances where replication groups in AWS ElastiCache are not encrypted at rest. This is useful to ensure data security by pinpointing potential vulnerabilities.

-- PostgreSQL:
select
  replication_group_id,
  cache_node_type,
  at_rest_encryption_enabled
from
  aws_elasticache_replication_group
where
  not at_rest_encryption_enabled;

-- SQLite:
select
  replication_group_id,
  cache_node_type,
  at_rest_encryption_enabled
from
  aws_elasticache_replication_group
where
  at_rest_encryption_enabled = 0;

List replication groups with multi-AZ disabled

Determine the areas in which replication groups have multi-AZ disabled to assess potential vulnerabilities in your AWS ElastiCache setup.

select
  replication_group_id,
  cache_node_type,
  multi_az
from
  aws_elasticache_replication_group
where
  multi_az = 'disabled';

List replication groups whose backup retention period is less than 30 days

Determine the areas in which backup retention periods for replication groups fall short of a 30-day standard, allowing for timely adjustments to ensure data safety.

select
  replication_group_id,
  snapshot_retention_limit,
  snapshot_window,
  snapshotting_cluster_id
from
  aws_elasticache_replication_group
where
  snapshot_retention_limit < 30;

List replication groups by node type

Explore which node types are used in your replication groups and determine their frequency. This can help optimize resource allocation and improve system performance.

select
  cache_node_type,
  count(*)
from
  aws_elasticache_replication_group
group by
  cache_node_type;

List member clusters for each replication group

Explore the relationships within your replication groups by identifying which member clusters belong to each group. This helps in understanding the distribution and organization of your data across different clusters.

-- PostgreSQL:
select
  replication_group_id,
  jsonb_array_elements_text(member_clusters) as member_clusters
from
  aws_elasticache_replication_group;

-- SQLite:
select
  replication_group_id,
  json_each.value as member_clusters
from
  aws_elasticache_replication_group,
  json_each(aws_elasticache_replication_group.member_clusters);
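To summarize rather than expand the membership, you can count members per group. A minimal PostgreSQL sketch using jsonb_array_length, assuming member_clusters is a JSON array as in the query above:

```sql
-- Number of member clusters per replication group, largest first
select
  replication_group_id,
  jsonb_array_length(member_clusters) as member_count
from
  aws_elasticache_replication_group
order by
  member_count desc;
```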
title: Steampipe Table: aws_elasticache_reserved_cache_node - Query AWS ElastiCache Reserved Cache Nodes using SQL
description: Allows users to query AWS ElastiCache Reserved Cache Nodes to gather details such as the reservation status, start time, duration, and associated metadata.

Table: aws_elasticache_reserved_cache_node - Query AWS ElastiCache Reserved Cache Nodes using SQL

AWS ElastiCache Reserved Cache Nodes are a type of node that you can purchase for a one-time, upfront payment in order to reserve capacity for future use. These nodes provide you with a significant discount compared to standard on-demand cache node pricing. They are ideal for applications with steady-state or predictable usage and can be used in any available AWS region.

Table Usage Guide

The aws_elasticache_reserved_cache_node table in Steampipe provides you with information about the reserved cache nodes within AWS ElastiCache. This table allows you, as a DevOps engineer, to query reserved cache node-specific details, including the reservation status, start time, and duration. You can utilize this table to gather insights on reserved cache nodes, such as their current status, the time at which the reservation started, the duration of the reservation, and more. The schema outlines the various attributes of the reserved cache node for you, including the reserved cache node ID, cache node type, start time, duration, fixed price, usage price, cache node count, product description, offering type, state, recurring charges, and associated tags.

Examples

Basic info

Explore which AWS ElastiCache reserved nodes are currently active, and gain insights into their type and associated offering IDs. This can help in managing resources and planning for future capacity needs.

select
  reserved_cache_node_id,
  arn,
  reserved_cache_nodes_offering_id,
  state,
  cache_node_type
from
  aws_elasticache_reserved_cache_node;

List reserved cache nodes with offering type All Upfront

Identify the reserved cache nodes that have been fully paid for upfront. This can help to manage costs and understand the financial commitment made for these resources.

select
  reserved_cache_node_id,
  arn,
  reserved_cache_nodes_offering_id,
  state,
  cache_node_type
from
  aws_elasticache_reserved_cache_node
where
  offering_type = 'All Upfront';

List reserved cache nodes ordered by duration

Identify the cache nodes that are reserved for the longest duration. This can help prioritize which nodes to investigate for potential cost savings or performance improvements.

select
  reserved_cache_node_id,
  arn,
  reserved_cache_nodes_offering_id,
  state,
  cache_node_type
from
  aws_elasticache_reserved_cache_node
order by
  duration desc;

List reserved cache nodes ordered by usage price

Identify the reserved cache nodes within your AWS ElastiCache service, organized by their usage price. This can help prioritize cost management efforts by highlighting the most expensive reservations.
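The query for this example is cut off in this copy. Following the pattern of the previous examples, and assuming the usage_price column listed in the table schema above, a sketch would be:

```sql
select
  reserved_cache_node_id,
  arn,
  reserved_cache_nodes_offering_id,
  state,
  usage_price
from
  aws_elasticache_reserved_cache_node
order by
  usage_price desc;
```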
