After hosting a website on S3, how can we change the text on its webpages? I deleted the older HTML files from the bucket and uploaded new files with the same names containing the updated text, but no changes were reflected after refreshing those webpages.
Is there any other way to update the webpages of a website already hosted on S3? If so, would somebody please post the steps here to make those updates? TIA.
2 Answers
#1
10
I notice you have CloudFront in your tags, so that is most likely the issue. When you upload a file to S3, CloudFront won't know about it right away if it replaces an existing file. Instead, it keeps the old copy for a default of 24 hours before it checks your origin (in this case your S3 bucket) to see whether any changes have been made and whether it needs to update the cache. There are a few ways to make it update the cache for those files:
- Using files with versions in their names, and updating links. The downside is that you have to make more changes than normal to get this to work.
- Invalidating the cache. This is not what Amazon recommends, but it is nonetheless a quick way to make the cache pick up new changes right away (see the sketch after the pricing note below). Note that there can be charges if you do a lot of invalidations:
No additional charge for the first 1,000 paths requested for invalidation each month. Thereafter, $0.005 per path requested for invalidation
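As a minimal sketch of an invalidation done from code with boto3 (the distribution ID is a placeholder, and the wildcard `/*` counts as a single path toward the free 1,000 per month):

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# "EDFDVBD6EXAMPLE" is a placeholder; substitute your own distribution ID.
response = cloudfront.create_invalidation(
    DistributionId="EDFDVBD6EXAMPLE",
    InvalidationBatch={
        "Paths": {
            "Quantity": 1,
            "Items": ["/*"],  # invalidate every cached object in the distribution
        },
        # CallerReference must be unique for each invalidation request.
        "CallerReference": str(time.time()),
    },
)
print(response["Invalidation"]["Id"], response["Invalidation"]["Status"])
```

You can do the same thing from the console under the distribution's Invalidations tab.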
In your distribution's cache behavior settings you can assign a path pattern (an individual file, a folder, etc.) and adjust certain properties. One of them is the TTL (Time To Live) for the paths in question. If you make the TTL a smaller value, CloudFront will pick up changes more quickly. However, since you have an S3 origin, note that a shorter TTL means more requests back to the bucket. Also, CloudFront will need some time to distribute these changes to all the edge servers.
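A related option, assuming your distribution's cache settings honor the origin's Cache-Control headers, is to set a short max-age on the objects themselves when you upload them. A rough boto3 sketch, where the bucket and file names are just examples:

```python
import boto3

s3 = boto3.client("s3")

# Upload (or overwrite) the page with a short cache lifetime so CloudFront
# and browsers re-check it after 60 seconds. Names are placeholders.
s3.upload_file(
    "index.html",
    "my-website-bucket",
    "index.html",
    ExtraArgs={
        "ContentType": "text/html",
        "CacheControl": "max-age=60",
    },
)
```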
Hope this helps.
#2
-1
You don't need to delete the older files to update to new ones in S3; just upload the new files under the same keys. Use versioning to avoid accidental updates to the objects.
S3 basically has the following consistency model:
1. Read-after-write consistency for PUTs of new objects.
2. Eventual consistency for overwrites of existing objects and for deletes of objects.
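As a rough sketch of that approach with boto3 (the bucket name is a placeholder), you would enable versioning once and then simply overwrite the object under the same key whenever the page changes:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-website-bucket"  # placeholder bucket name

# One-time setup: keep old versions around so an overwrite never loses data.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Publish the updated page by overwriting the same key; S3 stores it as a
# new version of the object.
s3.upload_file(
    "index.html",
    bucket,
    "index.html",
    ExtraArgs={"ContentType": "text/html"},
)
```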