django-compressor, heroku, s3: Request has expired

Time: 2022-11-26 23:02:26

I am using django-compressor on Heroku with Amazon S3 serving static files, and I keep running into the following error with the compressor-generated links to static files. I am totally new to compressor and S3:

https://xxx.s3.amazonaws.com/static/CACHE/css/989a3bfc8147.css?Signature=tBJBLUAWoA2xjGlFOIu8r3SPI5k%3D&Expires=1365267213&AWSAccessKeyId=AKIAJCWU6JPFNTTJ77IQ

<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<RequestId>FE4625EF498A9588</RequestId>
<Expires>2013-04-06T16:53:33Z</Expires>
<HostId>Fbjlk4eigroefpAsW0a533NOHgfQBG+WFRTJ392v2k2/zuG8RraifYIppLyTueFu</HostId>
<ServerTime>2013-04-06T17:04:41Z</ServerTime>
</Error>
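
The Expires query parameter is a Unix timestamp, and decoding it confirms what the error reports: the link stopped working at 2013-04-06T16:53:33Z, with ServerTime about eleven minutes past that. A quick check in Python:

from datetime import datetime, timezone

# Expires=1365267213 from the URL above, interpreted as a UTC timestamp
print(datetime.fromtimestamp(1365267213, tz=timezone.utc))
# -> 2013-04-06 16:53:33+00:00, matching <Expires> in the error response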

I have two Heroku servers configured, one for staging and one for production. They each have their own database and S3 bucket. They also share the same settings file; all environment-specific settings are configured as environment variables. I have checked that the static files are in fact being pushed to their respective buckets.

Compressor and S3 settings are as follows:

COMPRESS_ENABLED = True
COMPRESS_STORAGE = STATICFILES_STORAGE 
COMPRESS_URL = STATIC_URL
COMPRESS_ROOT = STATIC_ROOT
COMPRESS_OFFLINE = False

AWS_ACCESS_KEY_ID = os.environ.get('AWS_ACCESS_KEY_ID')
AWS_SECRET_ACCESS_KEY = os.environ.get('AWS_SECRET_ACCESS_KEY')
AWS_STORAGE_BUCKET_NAME = os.environ.get('AWS_STORAGE_BUCKET_NAME')
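
For context, the block above references STATICFILES_STORAGE, STATIC_URL, and STATIC_ROOT without showing them. With django-storages (the usual S3 backend for this kind of setup), a minimal sketch of those settings might look like the following; the bucket domain and paths are placeholders, not the poster's actual values:

# A sketch of the settings the block above presupposes (django-storages, boto backend).
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'
STATIC_URL = 'https://mybucket.s3.amazonaws.com/static/'  # placeholder bucket domain
STATIC_ROOT = 'staticfiles'  # local collectstatic target; placeholder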

Each time I push an update to Heroku on staging or production, I eventually run into the above issue. Sometimes it happens after an hour, sometimes a day, sometimes a week, and sometimes as soon as the update is pushed out. The odd thing is that, if I push the same update to both environments, one will work while the other throws the error; or both will work at first, then one expires within an hour and the other within a week.

I would really appreciate it if someone could explain what is going on. Obviously the Expires parameter is causing the problem, but why would the duration change with each push and what determines the amount of time? HOW DO YOU CHANGE THE EXPIRATION TIME? Please let me know if you need any more info.

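To make the moving parts concrete: when query-string auth is on, django-storages asks boto to generate a signed URL, and the Signature/Expires pair in the links above comes from that call. A rough sketch of the underlying boto 2.x API (credentials, bucket name, and expiry here are placeholders):

import boto

# Roughly what django-storages does under the hood when building a signed URL.
conn = boto.connect_s3('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')  # placeholder credentials
bucket = conn.get_bucket('mybucket')  # placeholder bucket name
key = bucket.get_key('static/CACHE/css/989a3bfc8147.css')
# expires_in is seconds from now; it becomes the Expires query parameter.
url = key.generate_url(expires_in=3600)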

UPDATE: I temporarily solved the problem by setting AWS_QUERYSTRING_AUTH = False. There does not seem to be any way to set the EXPIRATION TIME in the query string, only in the request header.

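Worth noting: with query-string auth disabled, the generated URLs carry no signature at all, so the objects themselves must be publicly readable. In django-storages terms that usually pairs with a public default ACL; a sketch, assuming the bucket is meant to be public:

# Plain, non-expiring URLs; the bucket contents must then be world-readable.
AWS_QUERYSTRING_AUTH = False
AWS_DEFAULT_ACL = 'public-read'  # assumption about the intended bucket setup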

2 Solutions

#1 (17 votes)

Give this a try:

AWS_QUERYSTRING_EXPIRE = 63115200

The value is the number of seconds from the time the links are generated.

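For reference, 63115200 seconds is exactly two Julian years; writing the value as arithmetic makes the intent clearer:

# 2 years of 365.25 days each, in seconds: 2 * 365.25 * 24 * 3600 = 63115200
AWS_QUERYSTRING_EXPIRE = int(2 * 365.25 * 24 * 3600)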

#2 (3 votes)

Just in case somebody has this same issue:

AWS_QUERYSTRING_AUTH = False

This disables query-string signing entirely, so the links no longer carry an expiry. Depending on the use case (as in mine and many others), expiration is not always needed.
