
@jflasher
Created February 1, 2019 15:21
Instructions for setting up a lifecycle policy for S3 Intelligent-Tiering

Amazon S3 supports a storage class called S3 Intelligent-Tiering, an attractive option for customers whose datasets are accessed in unpredictable patterns. You can set this storage class when uploading new data, but the instructions below set up a lifecycle policy that also changes the storage class of data already stored in your bucket.

To set up such a lifecycle policy for your Amazon S3 bucket, follow the steps below.

  1. Visit the S3 console and go to your bucket of interest.

  2. Click on the Management tab at the top and select + Add lifecycle rule.

  3. Enter a rule name of your choice (e.g., Convert to Intelligent Tiering storage class). Unless you want to filter which data is converted to the new storage class, you can leave the prefix/tag filter field empty.

  4. Click Next.

  5. Select the Current Version checkbox unless you have previously enabled versioning. Click the + Add transition link that appears.

  6. Select Transition to Intelligent-Tiering after in the dropdown box. You can leave the Days after creation value at its default of 0.

  7. Click Next.

  8. We do not need to set any expirations at this time. Click Next.

  9. Review your settings and, if everything looks correct, click Save.

You now have a lifecycle policy in place that will move any existing data (and new data that comes in) to the Intelligent-Tiering storage class.

@tsulatsitamim

Hi, why do I still see the Standard storage class when I upload a file? I upload to S3 through the AWS S3 console. Thank you.

@jflasher

Hi @tsulatsitamim, the instructions above are for setting a Lifecycle Policy. If you're uploading via the console, you can set the S3-IT storage class right away. Keep in mind objects smaller than 128KB will remain in the Standard storage class.
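Setting the storage class at upload time, as described above, can be sketched with boto3's `put_object` call (the bucket and key names here are placeholders, and the client is passed in as a parameter for illustration):

```python
def upload_as_intelligent_tiering(s3_client, bucket, key, body):
    """Upload an object directly into the INTELLIGENT_TIERING storage class.

    StorageClass is a per-object setting, so it can be chosen at upload time
    regardless of the bucket's default behavior.
    """
    return s3_client.put_object(
        Bucket=bucket, Key=key, Body=body,
        StorageClass="INTELLIGENT_TIERING")
```

With a real client this would be called as `upload_as_intelligent_tiering(boto3.client("s3"), "some-bucket", "some-key", b"...")`.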

@lwang1309

Hi! What's the difference between setting up Intelligent-Tiering Archive configurations under Properties and creating a lifecycle?

@jflasher

A lifecycle policy is a useful way to get the property set on all existing objects in a bucket, as they will age into the policy after the time you set (after 0 days in the example above). If you have an empty bucket and are adding new objects, just setting bucket properties will work fine.

@lwang1309

Thank you for your response! If I already have a lifecycle policy set up, when I upload new objects to the bucket should I choose Standard or Intelligent-Tiering for the storage class?

@jflasher

jflasher commented Jan 20, 2021 via email

@fieldse

fieldse commented Oct 17, 2021

Extremely helpful. Thanks :-)

@reece

reece commented Oct 30, 2021

Here's an example of how to do this with boto3:

import boto3

lifecycle_configuration = {
    "Rules": [{"ID": "Intelligent Tiering",
               "Filter": {},
               "Status": "Enabled",
               "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
               # Only relevant if bucket versioning is enabled
               "NoncurrentVersionTransitions": [{"NoncurrentDays": 0,
                                                 "StorageClass": "INTELLIGENT_TIERING"}]
               }]
}

profile_name = "default"  # a profile defined in ~/.aws/credentials
session = boto3.session.Session(profile_name=profile_name)
s3_client = session.client("s3")
bucket = "some-bucket"
s3_client.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration=lifecycle_configuration)

@HiteshGupta41420

I uploaded an object larger than 128 KB and I can still see it in the Standard storage class.

@jflasher

jflasher commented Feb 7, 2022

Hi @HiteshGupta41420, https://docs.aws.amazon.com/AmazonS3/latest/userguide/intelligent-tiering-overview.html is likely a good resource if you haven't looked over it yet. If you uploaded something in the IT class, it will take some time (30 days) to move into another storage class. That could possibly be what you're seeing.

@HiteshGupta41420

Hi @jflasher, thanks for the response. I created a lifecycle policy to transition objects from the Standard storage class to Intelligent-Tiering after 0 days, but I am not seeing that happen. Once an object is in Intelligent-Tiering, I understand it should take 30 days to move between tiers.

The object is uploaded into Standard and should go to Intelligent-Tiering after 0 days, per the lifecycle rule.

Could you please tell me what I am missing here?

@jflasher

Hi @HiteshGupta41420, from the docs, looks like you may be able to do a HEAD request to see the actual status of the object in case it differs from what you're seeing somewhere else. Unfortunately, I'm not able to provide any more insight than that.
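The HEAD check mentioned above can be sketched as a small boto3 helper (a sketch; the client is passed in as a parameter, and it relies on S3 omitting the StorageClass field for STANDARD objects, so a missing field is read as "STANDARD"):

```python
def storage_class_of(s3_client, bucket, key):
    """Return the storage class S3 reports for an object via a HEAD request.

    S3 omits the StorageClass field in the HeadObject response for objects
    in the STANDARD class, so a missing field is reported as "STANDARD".
    """
    resp = s3_client.head_object(Bucket=bucket, Key=key)
    return resp.get("StorageClass", "STANDARD")
```

With a real client: `storage_class_of(boto3.client("s3"), "some-bucket", "some-key")`.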

@qinjie

qinjie commented Sep 20, 2022

@HiteshGupta41420, it is most likely because lifecycle policies are not applied in real time. Give it around 48 hours before you check again. Lifecycle rules are evaluated at UTC midnight, and the transition is then performed by the following UTC midnight.

@whiplash98

I have an existing S3 bucket and I added a lifecycle policy for Intelligent-Tiering with a 0-day delay, but my objects still show as the Standard tier. Why?

@MrCsabaToth

Is it possible to configure Intelligent-Tiering to only go as far as Archive Instant Access (AIA) and not move objects into the higher-latency archive tiers? If yes, where?

@jflasher

jflasher commented Mar 5, 2024

Hi @MrCsabaToth I'd recommend checking out the docs or re:Post. This is not something I've looked into for quite some time at this point.
