I'm building a Ruby on Rails app that accesses about 6-7 APIs, grabs information from them based on the user's input, compares it, and displays the results to the user (the information is not saved in the database). I will be using Heroku to deploy the app. I would like the HTTP requests to those APIs to be made in parallel rather than sequentially, so the response time is better. What do you think is the best way to achieve this on Heroku?
Thank you very much for any suggestions!
4 Answers
#1
6
If you want to actually do the requests on the server side (tfe's JavaScript solution is a good idea), your best bet would be EventMachine, which gives you a simple way to do non-blocking IO.
Also check out EM-Synchrony for a set of Ruby 1.9 fiber-aware clients (including HTTP).
All you need to do for a non-blocking HTTP request is something like:
require "em-synchrony"
require "em-synchrony/em-http"
EM.synchrony do
concurrency = 2
urls = ['http://url.1.com', 'http://url2.com']
# iterator will execute async blocks until completion, .each, .inject also work!
results = EM::Synchrony::Iterator.new(urls, concurrency).map do |url, iter|
# fire async requests, on completion advance the iterator
http = EventMachine::HttpRequest.new(url).aget
http.callback { iter.return(http) }
http.errback { iter.return(http) }
end
p results # all completed requests
EventMachine.stop
end
Good luck!
#2
2
You could always make the requests client-side using JavaScript. Then not only can you run them in parallel, but you won't even need the round trip to your own server.
#3
1
I haven't tried parallelizing requests like that, but I have tried parallel on Heroku and it works like a charm! This is my simple blog post about it.
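If that refers to the parallel gem, a minimal sketch of fanning the API calls out across threads could look something like this (the URLs and thread count here are placeholders, not from the original post):

require "parallel"
require "net/http"

# hypothetical endpoints standing in for the 6-7 APIs the app talks to
urls = ["http://api1.example.com/search?q=foo",
        "http://api2.example.com/search?q=foo"]

# in_threads suits IO-bound work: each thread blocks on its own HTTP call
# while the others keep going
responses = Parallel.map(urls, in_threads: urls.size) do |url|
  Net::HTTP.get_response(URI(url))
end

responses.each { |response| puts response.code }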
#4
0
Have a look at creating each request as a background job: http://blog.heroku.com/archives/2009/7/15/background_jobs_with_dj_on_heroku/
The more 'Workers' you buy from Heroku, the more background jobs can be processed concurrently, leaving your 'Dynos' free to serve your users.
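As a rough sketch of that idea (the job class, URL list, and cache keys below are invented for illustration, and it assumes the delayed_job gem from the linked post), each API call could be wrapped in its own job, with the responses written somewhere the web dyno can read them back, such as the Rails cache:

require "net/http"

# hypothetical delayed_job job object: anything with a #perform method can be enqueued
class FetchApiJob < Struct.new(:url, :cache_key)
  def perform
    response = Net::HTTP.get_response(URI(url))
    # store the body so the web dyno can collect it later for comparison
    Rails.cache.write(cache_key, response.body, expires_in: 5.minutes)
  end
end

# in the controller: one job per API, picked up concurrently by the worker dynos
API_URLS.each_with_index do |url, i|
  Delayed::Job.enqueue(FetchApiJob.new(url, "api_result/#{i}"))
end

Note that the web request would still need to wait (or poll those cache keys) until every job has finished before it can compare and render the results, so this approach adds some latency of its own compared to doing the fetches in-process.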