Upload a gm-resized image to S3 with aws-sdk

Time: 2023-01-29 23:03:49

What I want to do is stream an image from a URL, process it with GraphicsMagick, and stream-upload it to S3. I just can't get it working.

Streaming the processed image to local disk (using fs.createWriteStream) works without a problem.

When I buffer my stream, the final image in S3 at least has the expected size (KB-wise), but I cannot open that image.

That's my current progress:

var request = require('request');
var gm = require("gm");
var AWS = require('aws-sdk');
var mime = require('mime');

var s3 = new AWS.S3();

gm(request('http://www.some-domain.com/some-image.jpg'), "my-image.jpg")
  .resize("100^", "100^")
  .stream(function(err, stdout, stderr) {
    var str = '';
    stdout.on('data', function(data) {
       str += data;
    });
    stdout.on('end', function(data) {
      var data = {
        Bucket: "my-bucket",
        Key: "my-image.jpg",
        Body: new Buffer(str, 'binary'), // that's where I'm probably wrong
        ContentType: mime.lookup("my-image.jpg")
      };
      s3.client.putObject(data, function(err, res) {
        console.log("done");
      });
    });
  });

I did try some things, like creating a file write stream and a file read stream, but I think there should be a cleaner and nicer solution to that problem...
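
Such a temp-file detour might look roughly like this (just a sketch; the /tmp path is an example, and I'm not sure whether the explicit ContentLength is needed with this SDK version):

var fs = require('fs');

gm(request('http://www.some-domain.com/some-image.jpg'), "my-image.jpg")
  .resize("100^", "100^")
  .stream(function(err, stdout, stderr) {
    var tmpPath = '/tmp/my-image.jpg'; // example temp location
    var file = fs.createWriteStream(tmpPath);
    stdout.pipe(file);
    file.on('close', function() {
      // the file on disk has a known size, so Content-Length can be set explicitly
      fs.stat(tmpPath, function(err, stats) {
        s3.client.putObject({
          Bucket: "my-bucket",
          Key: "my-image.jpg",
          Body: fs.createReadStream(tmpPath),
          ContentLength: stats.size,
          ContentType: mime.lookup("my-image.jpg")
        }, function(err, res) {
          console.log("done");
        });
      });
    });
  });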

EDIT: The first thing I tried was setting the Body to stdout (the answer suggested by @AndyD):

var data = {
  Bucket: "my-bucket",
  Key: "my-image.jpg",
  Body: stdout,
  ContentType: mime.lookup("my-image.jpg")
};

But that returns the following error:

Cannot determine length of [object Object]'

EDIT2:

  • node version: 0.8.6 (I also tried 0.8.22 and 0.10.0)
  • aws-sdk: 0.9.7-pre.8 (installed today)

The complete error:

{ [Error: Cannot determine length of [object Object]]
  message: 'Cannot determine length of [object Object]',
  object:
   { _handle:
      { writeQueueSize: 0,
        owner: [Circular],
        onread: [Function: onread] },
     _pendingWriteReqs: 0,
     _flags: 0,
     _connectQueueSize: 0,
     destroyed: false,
     errorEmitted: false,
     bytesRead: 0,
     _bytesDispatched: 0,
     allowHalfOpen: undefined,
     writable: false,
     readable: true,
     _paused: false,
     _events: { close: [Function], error: [Function: handlerr] } },
  name: 'Error' }
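
For anyone reading this later: the error seems to mean the SDK cannot work out a Content-Length for the stream it was handed. Much later aws-sdk releases ship an s3.upload() helper that accepts a Body stream of unknown length; it did not exist in the 0.9.x pre-release above, but as a rough sketch:

// only in later aws-sdk releases, not in the 0.9.x pre-release used here
s3.upload({
  Bucket: "my-bucket",
  Key: "my-image.jpg",
  Body: stdout, // a stream of unknown length is fine for upload()
  ContentType: mime.lookup("my-image.jpg")
}, function(err, res) {
  console.log("done");
});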

1 Answer

#1


You don't need to read the stream yourself (in your case you seem to be converting from binary to string and back, because of var str = '' and then appending data, which is a binary Buffer, etc.).
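
A tiny snippet makes the corruption visible (illustrative only):

var chunk = new Buffer([0xff, 0xd8, 0xff, 0xe0]); // typical first bytes of a JPEG
var roundTripped = new Buffer('' + chunk, 'binary'); // what str += data effectively does
console.log(chunk);        // <Buffer ff d8 ff e0>
console.log(roundTripped); // bytes mangled by the implicit UTF-8 conversion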

Try letting putObject pipe the stream like this:

gm(request('http://www.some-domain.com/some-image.jpg'), "my-image.jpg")
  .resize("100^", "100^")
  .stream(function(err, stdout, stderr) {
    var data = {
      Bucket: "my-bucket",
      Key: "my-image.jpg",
      Body: stdout,
      ContentType: mime.lookup("my-image.jpg")
    };
    s3.client.putObject(data, function(err, res) {
      console.log("done");
    });
  });

See these release notes for more info.

If streaming/piping doesn't work, then something like this might; it loads everything into memory and then uploads. You're limited to 4 MB in this case, I think.

    var buf = new Buffer('');
    stdout.on('data', function(data) {
      // accumulate the raw binary chunks; no string conversion in between
      buf = Buffer.concat([buf, data]);
    });
    stdout.on('end', function() {
      var data = {
        Bucket: "my-bucket",
        Key: "my-image.jpg",
        Body: buf,
        ContentType: mime.lookup("my-image.jpg")
      };
      s3.client.putObject(data, function(err, res) {
        console.log("done");
      });
    });
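
Note that this holds the entire resized image in memory before the upload starts, and Buffer.concat only exists from node 0.8 onwards (so it should be fine on the versions listed above). For anything much larger than a thumbnail, the piped version above is the safer bet.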
