No metadata for search results #84
Thanks for the comment. As you indicated, metadata is currently excluded while parsing the results of the search API. I'm starting to wonder if a certain meta-option could be added to allow access to the metadata. If you have any opinions regarding the interface, I'd appreciate it if you could share some (I'll be thinking about it some more).

```elixir
prev_response = ExTwitter.search("pizza", [count: 100, search_metadata: true])
response = ExTwitter.search_next_page(prev_response.metadata)
```

```elixir
defmodule Searcher do
  def search_next_page(prev_response, index) do
    IO.puts("Fetching page " <> to_string(index))
    response = ExTwitter.search_next_page(prev_response.metadata)

    if response != nil do
      prev_response.statuses ++ search_next_page(response, index + 1)
    else
      prev_response.statuses
    end
  end
end

response = ExTwitter.search("pizza", [count: 100, search_metadata: true])
tweets = Searcher.search_next_page(response, 1)
```
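The recursive `Searcher` above can also be expressed as a lazy stream, which only fetches pages as they are consumed. This is a minimal sketch of that interface idea, assuming a hypothetical `fetch_page/1` standing in for `ExTwitter.search_next_page/1`, with stubbed in-memory pages instead of real API calls:

```elixir
defmodule PagingSketch do
  # Stubbed pages simulating three API responses. A :metadata value of nil
  # means there is no further page; otherwise it is the cursor for the next one.
  @pages %{
    1 => %{statuses: [:a, :b], metadata: 2},
    2 => %{statuses: [:c], metadata: 3},
    3 => %{statuses: [:d], metadata: nil}
  }

  # Hypothetical stand-in for ExTwitter.search_next_page/1.
  def fetch_page(cursor), do: Map.get(@pages, cursor)

  # Lazily walk pages until metadata runs out, flattening statuses.
  def stream_statuses(first_cursor) do
    Stream.unfold(first_cursor, fn
      nil ->
        nil

      cursor ->
        page = fetch_page(cursor)
        {page.statuses, page.metadata}
    end)
    |> Stream.flat_map(& &1)
  end
end

statuses = PagingSketch.stream_statuses(1) |> Enum.to_list()
IO.inspect(statuses) # [:a, :b, :c, :d]
```

With a stream, a caller can also stop early (`Enum.take/2`) without fetching pages they never use.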
I enjoy this interface idea. My design was going to include a struct for … Then … One issue I have been having is using the …
What about fetching other things that require paging? For instance, right now it's not possible to fetch beyond 200 items (this is per Twitter API design) when fetching favorited tweets:

```elixir
length(ExTwitter.favorites(count: 1000))
# => 200
```

This means it's technically possible to fetch all favorites, but it would require doing this manually. @parroty, would your proposed design cover this case as well, or is it only suited for …?
@Fallenstedt Thanks for the comment, and sorry for being late to respond. As current …

@gmile Thanks for the comment. If writing iterative logic (like the above example) is acceptable, I think code like the following would correspond to that case:

```elixir
defmodule FavoritesSearcher do
  def run(options) do
    do_run(options, 1)
  end

  defp do_run(options, index) do
    favorites = ExTwitter.favorites(options)

    IO.puts(
      "Fetched page " <> to_string(index) <>
        " with " <> to_string(Enum.count(favorites)) <>
        " tweets by max_id " <> to_string(Keyword.get(options, :max_id, nil))
    )

    if Enum.count(favorites) > 0 do
      favorites ++ do_run(Keyword.merge(options, max_id: List.last(favorites).id - 1), index + 1)
    else
      favorites
    end
  end
end

favorites = FavoritesSearcher.run(screen_name: "justinbieber", count: 200)
```
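The `max_id` walk above can be exercised without network access. This sketch stubs a `favorites/1` function in place of `ExTwitter.favorites/1` (the id timeline is made up), and pages exactly as the code above does: re-request with `max_id` set to the last seen id minus one until a page comes back empty.

```elixir
defmodule MaxIdSketch do
  # Stubbed list of favorite tweet ids, newest first, standing in for the API.
  @all_ids [50, 40, 30, 20, 10]

  # Simulated ExTwitter.favorites/1: returns up to :count items with
  # id <= :max_id (or from the top when :max_id is absent).
  def favorites(options) do
    max_id = Keyword.get(options, :max_id, :none)
    count = Keyword.get(options, :count, 2)

    @all_ids
    |> Enum.filter(fn id -> max_id == :none or id <= max_id end)
    |> Enum.take(count)
    |> Enum.map(fn id -> %{id: id} end)
  end

  # Same shape as FavoritesSearcher: recurse with max_id = last_id - 1.
  def fetch_all(options) do
    favorites = favorites(options)

    if favorites == [] do
      []
    else
      favorites ++ fetch_all(Keyword.merge(options, max_id: List.last(favorites).id - 1))
    end
  end
end

ids = MaxIdSketch.fetch_all(count: 2) |> Enum.map(& &1.id)
IO.inspect(ids) # [50, 40, 30, 20, 10]
```

The `- 1` matters: Twitter's `max_id` bound is inclusive, so repeating the last id verbatim would re-fetch that tweet on every page.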
@parroty nice, thanks for the code excerpt! I'll try to base my implementation on it. Generally, do you think …?

I'm trying to think of cases where manually pulling additional items is appropriate, i.e. when a user would want to run their own code in between the API calls done by …

I think what really matters for the end user is their intention: to just get the N results they requested (be it 5, 200 or 1000), and not to operate a pull-check-pull-again mechanism themselves.

From looking at the reference, I see that different GET calls rely on different …
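The "just get the N results requested" idea above could be sketched as a thin wrapper that hides the paging loop entirely. This is only an illustration, assuming a hypothetical fixed-size `page/1` fetcher in place of a real API call; a lazy stream means only as many pages as needed are ever fetched.

```elixir
defmodule TakeN do
  # Hypothetical fetcher: each "page" is 100 sequential integers starting
  # at offset, a stand-in for one Twitter API call of 100 tweets.
  def page(offset), do: Enum.to_list(offset..(offset + 99))

  # Lazily concatenate pages, then take exactly n items. Because the
  # stream is lazy, asking for 250 items triggers only three page fetches.
  def take(n) do
    Stream.iterate(0, &(&1 + 100))
    |> Stream.flat_map(&page/1)
    |> Enum.take(n)
  end
end

IO.inspect(length(TakeN.take(250))) # 250
```

A real implementation would also have to stop when the API runs out of results before N is reached, which the stubbed fetcher here never does.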
Twitter's API allows you to receive metadata for search results.
Using `ExTwitter.search` I am able to search for tweets. As an example, I can search for 120 tweets about pizza near Portland with:

…

Running this in the console, we can see that the number of tweets returned is 100:

```elixir
a = MyModule.search("pizza", 120, 500) |> Enum.count
```
Nowhere in this list is any search metadata that includes a `next_results` token for me to obtain the next twenty tweets. In the Twitter API, we should have metadata that might look like this:

…

My suspicion is that when the results are parsed, we are excluding this metadata. I've forked this library and tested searching without parsing, and I have access to this metadata.

Is there currently a way we can parse the JSON to include the `search_metadata`? If not, how can I contribute? I would love to have a feature that allows me to page through my data, because right now I am locked to getting only 100 results when I may need thousands.
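For reference, Twitter's v1.1 Search API returns a `search_metadata` object on each response; it looks roughly like the following (field names from the documented API, values illustrative):

```json
{
  "completed_in": 0.035,
  "max_id": 250126199840518145,
  "max_id_str": "250126199840518145",
  "next_results": "?max_id=249279667666817023&q=pizza&count=100&include_entities=1",
  "query": "pizza",
  "refresh_url": "?since_id=250126199840518145&q=pizza",
  "count": 100,
  "since_id": 0,
  "since_id_str": "0"
}
```

The `next_results` query string is what a client would replay to fetch the next page.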