Polling multiple Twitter accounts for tweet impressions and likes

Time: 2022-06-23 12:02:46

I'm currently working on a use case (developed in Java/Spring) where I have a large number of Twitter accounts (the count can reach into the thousands) to which I post data (tweets) as configured/scheduled.

I've implemented posting data to Twitter, but I'm unsure how to pull impressions, retweets, and likes for tweets across these various accounts.

One solution is to poll all accounts at a regular interval, but in that case I won't get the number of likes on tweets already made, because I'm using the user and mentions timeline APIs with the "since_id" parameter, which only fetch tweets and retweets newer than that ID and therefore never return updated like counts for my older tweets.
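One way around the "since_id only returns newer tweets" limitation is to keep a record of the IDs of tweets you have posted and refresh their metrics separately via a batch lookup endpoint (in Twitter's v1.1 API, `statuses/lookup` accepts up to 100 IDs per request). A minimal sketch of the batching step, assuming the 100-ID limit:

```java
import java.util.ArrayList;
import java.util.List;

// Splits a stored list of tweet IDs into batches of up to 100 IDs,
// matching the statuses/lookup endpoint's per-request limit, so that
// like/retweet counts can be refreshed even for older tweets that a
// since_id-based timeline poll would skip.
public class TweetIdBatcher {
    static final int MAX_IDS_PER_LOOKUP = 100;

    public static List<long[]> batches(List<Long> tweetIds) {
        List<long[]> result = new ArrayList<>();
        for (int start = 0; start < tweetIds.size(); start += MAX_IDS_PER_LOOKUP) {
            int end = Math.min(start + MAX_IDS_PER_LOOKUP, tweetIds.size());
            long[] batch = new long[end - start];
            for (int i = start; i < end; i++) {
                batch[i - start] = tweetIds.get(i);
            }
            result.add(batch);
        }
        return result;
    }
}
```

Each `long[]` batch would then be passed to a single lookup call (e.g. Twitter4J's `twitter.lookup(long... ids)`), which returns the current favorite/retweet counts regardless of tweet age.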

Another option is to use the streaming APIs, opening a stream for every Twitter account I have, but that doesn't seem feasible to me because I have a very large number of accounts and I doubt my Java app can handle that many concurrent streams.
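Rather than one long-lived stream per account, the polling work can be bounded with a fixed-size thread pool so thousands of accounts never translate into thousands of threads or connections. A minimal sketch, where `pollAccount` is a hypothetical stand-in for the real API call:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Polls many accounts with a bounded number of worker threads instead of
// opening one stream per account. pollAccount is a placeholder; the real
// version would call the Twitter API and return the metric of interest.
public class AccountPoller {
    private final ExecutorService pool;

    public AccountPoller(int workers) {
        this.pool = Executors.newFixedThreadPool(workers);
    }

    public Map<String, Integer> pollAll(List<String> accounts) {
        List<Future<Map.Entry<String, Integer>>> futures = new ArrayList<>();
        for (String account : accounts) {
            futures.add(pool.submit(() -> Map.entry(account, pollAccount(account))));
        }
        Map<String, Integer> metrics = new HashMap<>();
        try {
            for (Future<Map.Entry<String, Integer>> f : futures) {
                Map.Entry<String, Integer> e = f.get();
                metrics.put(e.getKey(), e.getValue());
            }
        } catch (Exception e) {
            throw new RuntimeException("polling failed", e);
        }
        return metrics;
    }

    // Hypothetical placeholder returning a dummy metric so the sketch runs.
    int pollAccount(String account) {
        return account.length();
    }

    public void shutdown() {
        pool.shutdown();
    }
}
```

With Spring, the same idea maps onto a `@Scheduled` job submitting tasks to a configured `TaskExecutor`; the pool size, not the account count, then determines the app's concurrency.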

Can someone please suggest how I can solve this? Any help is greatly appreciated.

1 Answer

#1


It seems your problem is due to scale rather than design, going by the statement "and I doubt that my Java app can handle that many no. of streams."

Let's look in a different direction.

It's time to move to the world of "Big Data": Apache Kafka, Pig, Hive, YARN, Storm, HBase, Hadoop, etc. The list is overwhelming.

  1. Apache Spark: large-scale data processing that supports concepts such as MapReduce, in-memory processing, stream processing, graph processing, etc.
  2. Apache Storm: stream processing originally created at Twitter, so you could say it is the natural counterpart for this use case.
  3. Apache Kafka: offers brokers that collect streams, then log and buffer them in a fault-tolerant manner.
  4. Hadoop: for storage of the data. http://www.itworld.com/article/2827285/big-data/what-hadoop-can--and-can-t-do.html
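The broker idea in the list above can be sketched without any infrastructure: the poller produces metric events onto a bounded buffer and a separate consumer drains them, decoupling collection from processing. This is a minimal in-JVM stand-in; in production the buffer would be a Kafka topic, and the event shape here is an assumption for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Decouples metric collection from processing via a bounded buffer,
// mimicking a broker: producers block when the buffer is full
// (back-pressure) and consumers block until an event arrives.
public class MetricPipeline {
    // Hypothetical event shape for one tweet's refreshed metrics.
    record MetricEvent(long tweetId, int likes, int retweets) {}

    private final BlockingQueue<MetricEvent> queue = new ArrayBlockingQueue<>(1024);

    public void produce(MetricEvent e) {
        try {
            queue.put(e); // blocks when the buffer is full
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(ex);
        }
    }

    public MetricEvent consume() {
        try {
            return queue.take(); // blocks until an event is available
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(ex);
        }
    }
}
```

Swapping the `BlockingQueue` for a Kafka producer/consumer pair keeps the same shape while letting the collection and processing sides scale independently.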

Happy designing.
