
Sai Kothapalle (seeker815)
Bangalore, India
To ensure that access to AWS CloudShell is restricted using Terraform, create IAM policies with the minimum necessary permissions and attach them to specific IAM users or groups. In this example, we will create a policy that explicitly denies the `cloudshell:*` actions granted by the `AWSCloudShellFullAccess` managed policy, leaving only the essential permissions to be granted separately under least privilege.
First, make sure you have a recent AWS provider version installed. You can check the Terraform and provider versions in use by running:
```sh
terraform version
```
Next, update your Terraform configuration file (e.g., `main.tf`) with the following code:
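The snippet itself is cut off in this preview. A minimal sketch of the intended configuration, assuming an explicit `Deny` on all CloudShell actions attached to an illustrative group (resource and policy names are placeholders):

```hcl
# Illustrative group of users whose CloudShell access should be blocked.
resource "aws_iam_group" "cloudshell_restricted" {
  name = "cloudshell-restricted-users"
}

# Customer-managed policy that explicitly denies every CloudShell action;
# an explicit Deny overrides any Allow from AWSCloudShellFullAccess.
resource "aws_iam_policy" "deny_cloudshell" {
  name        = "DenyCloudShellAccess"
  description = "Explicitly denies all AWS CloudShell actions"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Deny"
      Action   = "cloudshell:*"
      Resource = "*"
    }]
  })
}

# Attach the deny policy to the group.
resource "aws_iam_group_policy_attachment" "deny_cloudshell" {
  group      = aws_iam_group.cloudshell_restricted.name
  policy_arn = aws_iam_policy.deny_cloudshell.arn
}
```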
To monitor usage of the root account in an Amazon Web Services (AWS) environment with Terraform, you can combine IAM policies with CloudTrail logs. Here's a step-by-step guide:
1. Create an IAM group and role for privileged access:
First, create an IAM group and attach policies granting the permissions that would otherwise require the root user. It is strongly recommended to use IAM users or roles instead of the root account for day-to-day tasks. Here's a Terraform snippet for creating such a group:
```hcl
resource "aws_iam_group" "example_group" {
  # Group for users who would otherwise reach for the root account.
  # (aws_iam_group takes no description argument, so it lives in a comment.)
  name = "example_root_access_group"
}
```
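The remaining steps of this guide are cut off in the preview. For the monitoring half, the standard pattern (as in the CIS AWS Foundations benchmark) is a CloudWatch Logs metric filter over the CloudTrail log group plus an alarm. A minimal sketch, assuming a trail already delivers to a log group managed as `aws_cloudwatch_log_group.cloudtrail` and that an `aws_sns_topic.security_alerts` topic exists (both names are placeholders):

```hcl
# Metric filter that counts root-account activity in CloudTrail logs.
resource "aws_cloudwatch_log_metric_filter" "root_usage" {
  name           = "root-account-usage"
  log_group_name = aws_cloudwatch_log_group.cloudtrail.name

  # CIS-style pattern: any root activity that is not an AWS service event.
  pattern = "{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }"

  metric_transformation {
    name      = "RootAccountUsageCount"
    namespace = "CISBenchmark"
    value     = "1"
  }
}

# Alarm that notifies as soon as the root account is used at all.
resource "aws_cloudwatch_metric_alarm" "root_usage" {
  alarm_name          = "root-account-usage"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 1
  metric_name         = "RootAccountUsageCount"
  namespace           = "CISBenchmark"
  period              = 300
  statistic           = "Sum"
  threshold           = 1
  alarm_actions       = [aws_sns_topic.security_alerts.arn]
}
```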
```yaml
loki:
  auth_enabled: false
  server:
    http_listen_port: 3100
    grpc_listen_port: 9095
  # -- Limits config
  limits_config:
    enforce_metric_name: false
```
@seeker815
seeker815 / sa.ts
Last active February 2, 2023 06:44
service account gap
```typescript
const saApiAdmin = new gcp.serviceaccount.Account(`sa-apiadmin-${projectEnv}`, {
  accountId: `sa-apiadmin-${projectEnv}`,
  displayName: `A service account used for bucket access for API`,
});

const storageRWRole = new gcp.projects.IAMCustomRole(`role-api-storage-rw-${projectEnv}`, {
  description: "Bucket/pubsub read write role",
  permissions: [
    "storage.objects.create",
    // ... (remaining permissions truncated in the gist preview)
  ],
});
```
@seeker815
seeker815 / ingress-patch.ts
Created January 25, 2023 21:00
setup ingress, issuer and patch with tls
```typescript
// create a static global IP to map to the ingress
const globalIP = new gcp.compute.GlobalAddress(`api-global-${projectEnv}`, {
  addressType: "EXTERNAL",
  description: `Use for Ingress/Load balancer for backend-API, project ${gcp.config.project}`,
});
export const ingressGlobalIP = globalIP.address;

// create namespaces for cert-manager and the Neo4j cluster
const certNS = new k8s.core.v1.Namespace(`cert-manager-${projectEnv}`, { metadata: { name: `cert-manager-${projectEnv}` } }, { provider: clusterProvider });
const neo4jNS = new k8s.core.v1.Namespace(`neo4j-${projectEnv}`, { metadata: { name: `neo4j-${projectEnv}` } }, { provider: clusterProvider });

// install the Neo4j cluster-core Helm release into the Neo4j namespace
const core1 = new k8s.helm.v3.Release("core-1", {
  chart: "neo4j-cluster-core",
  repositoryOpts: {
    repo: neo4jHelmRepository,
  },
  version: "4.4.15",
  namespace: neo4jNS.metadata.name,
  name: "core-1",
  // ... (truncated in gist preview)
});
```
@seeker815
seeker815 / cluster.ts
Created January 22, 2023 18:53
Provision GKE cluster
```typescript
import * as k8s from "@pulumi/kubernetes";
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";
import * as certmanager from "@pulumi/kubernetes-cert-manager";
import {
  gkeClusterName, clusterNodeCount, primaryNodeCount, secondaryNodeCount,
  nodeMachineType, secondaryNodeMachineType, clusterPoolIdentity, clusterLocation,
  clusterNetwork, clusterMasterCIDR, clusterPodIPCIDR, clusterSvcIPCIDR,
  clusterExtNetwork1, clusterExtNetwork2, clusterExtNetwork3,
  neo4jHelmChart, neo4jReleaseName, neo4jHelmRepository, neo4jChartVersion,
  neo4jURI, apiNodeENV, apiMemLimits, apiImage, projectEnv, datadogAPIKey,
  issuerName, neo4jStorage,
} from "./config";
import { createClusterNeo4J } from "./neo4j_cluster";
import { NetworkPeering } from "@pulumi/gcp/compute";
import { local } from "@pulumi/command";
import { project } from "@pulumi/gcp/config";
import { CertManager } from "@pulumi/kubernetes-cert-manager";

// namespace for the Neo4j cluster
const neo4jNS = new k8s.core.v1.Namespace(`neo4j-${projectEnv}`, { metadata: { name: `neo4j-${projectEnv}` } }, { provider: clusterProvider });

// Neo4j cluster-core chart, fetched straight from the Neo4j Helm repo
const core1 = new k8s.helm.v3.Chart("core-1", {
  chart: "neo4j-cluster-core",
  fetchOpts: {
    repo: "https://helm.neo4j.com/neo4j",
  },
  version: "4.4.15",
  namespace: neo4jNS.metadata.name,
  values: {
    // ... (truncated in gist preview)
  },
});
```

```typescript
// GCE ingress for the API, wired to the static IP and cert-manager issuer
const ingress = new k8s.networking.v1.Ingress(`api-ingress-${projectEnv}`, {
  metadata: {
    namespace: appNs.metadata.name,
    annotations: {
      "kubernetes.io/ingress.class": "gce",
      "kubernetes.io/ingress.allow-http": "true",
      "kubernetes.io/ingress.global-static-ip-name": webIP.name,
      "cert-manager.io/issuer": "letsencrypt-dev-api",
    },
    name: `api-ingress-${projectEnv}`,
  },
  // ... (spec truncated in gist preview)
});
```
@seeker815
seeker815 / connect-k8s.md
Created April 18, 2022 12:22 — forked from Piotr1215/connect-k8s.md
Keep cluster connections separate

How to keep cluster connections cleanly separated

Over time, the .kube/config file will contain a mix of dev, test, and prod cluster references. It is easy to forget to switch away from a prod cluster context and mistakenly run, for example, `kubectl delete ns crossplane-system`.

Direnv-based setup

Use the following setup to avoid these kinds of errors and keep clusters separate.

Install direnv
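The rest of the forked gist is cut off in this preview; the core of the setup is a per-cluster directory whose `.envrc` points `KUBECONFIG` at a kubeconfig stored inside that directory. A minimal sketch (paths are illustrative):

```sh
# .envrc in a directory dedicated to one cluster; direnv exports this on
# entering the directory and unsets it again when you leave.
export KUBECONFIG="$PWD/.kube/config"
```

After creating the file, run `direnv allow` once to approve it; from then on, kubectl invoked inside that directory only ever sees that one cluster's config.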