How to map an AWS S3 bucket into containers on AWS ECS using an AWS CloudFormation template?

Date: 2021-06-22 10:47:45

I'm working on a VOIP project using Asterisk on Linux. Our current goal is to have several EC2 machines running an Asterisk container on each of them, and we want to be able to have development, staging and production environments. To do this, I'm writing a CloudFormation template to use AWS-ECS. My problem is that I can't find the proper way to map AWS-S3 buckets into container volumes. I want to use 2 different buckets. One for injecting Asterisk config files into all containers. Another one to save voice messages and logs of all containers.


Thanks,


P.S. I've pushed my Asterisk image to AWS-ECR and referenced it in a TaskDefinition. I see MountPoints and Volumes there, but they don't seem to be my solution.


2 solutions

#1


1  

Could you try using environment variables in your task definitions?


In a CF template it would look like this:


"DefJob": {
   "Type": "AWS::ECS::TaskDefinition",
   "Properties": {
      "ContainerDefinitions": [
         {
            "Name": "integration-jobs",
            "Cpu": "3096",
            "Essential": "true",
            "Image": "828387064194.dkr.ecr.us-east-1.amazonaws.com/poblano:integration",
            "Memory": "6483",
            "Environment": [
               {
                  "Name": "S3_REGION",
                  "Value": "us-east-1"
               },
               {
                  "Name": "S3_BUCKET",
                  "Value": "Name-of-S3"
               }
               ........

And then reference these environment variables in your containers to use the S3 buckets. You'll have to make sure that your instance has permission to access these S3 buckets.

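For example, a container entrypoint could read those variables and pull the Asterisk config down from the bucket at startup. This is only a sketch: the `asterisk-config` prefix, the `/etc/asterisk` target and the `RUN_ENTRYPOINT` guard are assumptions, and it requires the AWS CLI in the image plus an instance/task role with read access to the bucket.

```shell
#!/bin/bash
# Hypothetical entrypoint sketch: sync the Asterisk config from the bucket
# named in the task definition's environment, then start Asterisk.
set -eu

build_sync_cmd() {
    # Build the AWS CLI command from the injected variables; kept as a
    # function so it can be inspected without actually calling AWS.
    echo "aws s3 sync s3://${S3_BUCKET}/asterisk-config /etc/asterisk --region ${S3_REGION}"
}

main() {
    : "${S3_REGION:?must be set in the task definition}"
    : "${S3_BUCKET:?must be set in the task definition}"
    $(build_sync_cmd)   # needs the AWS CLI and s3:GetObject/s3:ListBucket on the bucket
    exec asterisk -f    # run Asterisk in the foreground as PID 1
}

# Guard so the file can be sourced without immediately running main.
if [ "${RUN_ENTRYPOINT:-0}" = "1" ]; then
    main
fi
```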

Thanks, Manish


#2


0  

I know this doesn't answer the question exactly, and it's more basic than Manish's solution, but a simple way to achieve shared storage between the ECS containers is to rely on Elastic File System (EFS).


By putting such a script in the User Data of the Docker instances, or in the Auto Scaling group's launch configuration, the EFS can be mounted at /mnt/efs on every Docker instance, so shared volumes can point to something like /mnt/efs/something.


For this, the following User Data block does the job (we use it with Amazon ECS-Optimized images).


Content-Type: multipart/mixed; boundary="==BOUNDARY=="
MIME-Version: 1.0

--==BOUNDARY==
MIME-Version: 1.0
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
yum install -y nfs-utils
mkdir "/mnt/efs"
echo "us-east-1a.fs-1234567.efs.us-east-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" >> /etc/fstab
mount -a
/etc/init.d/docker restart
docker start ecs-agent
--==BOUNDARY==--

Docker is restarted at the end, otherwise it doesn't see the mounted volume (necessary only on instance creation).

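With the EFS mounted on every host, the MountPoints and Volumes mentioned in the question do become useful: the task definition can declare the host path as a volume and mount it into the container. A sketch in the same JSON style (the volume name and paths are assumptions):

```json
"ContainerDefinitions": [
   {
      "Name": "asterisk",
      "MountPoints": [
         {
            "SourceVolume": "efs-voicemail",
            "ContainerPath": "/var/spool/asterisk/voicemail"
         }
      ]
   }
],
"Volumes": [
   {
      "Name": "efs-voicemail",
      "Host": { "SourcePath": "/mnt/efs/voicemail" }
   }
]
```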

Of course, for this to work, the security groups must be set to allow the instances and the EFS mount targets to communicate over the NFS port (2049).

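In the CloudFormation template, that rule could look like the sketch below: an ingress rule on the EFS mount target's security group allowing TCP 2049 from the instances' security group (both `Ref` targets are placeholder resource names):

```json
"EfsNfsIngress": {
   "Type": "AWS::EC2::SecurityGroupIngress",
   "Properties": {
      "GroupId": { "Ref": "EfsSecurityGroup" },
      "IpProtocol": "tcp",
      "FromPort": "2049",
      "ToPort": "2049",
      "SourceSecurityGroupId": { "Ref": "InstanceSecurityGroup" }
   }
}
```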
