A good amount of the work I do involves using social media content to analyze networks, sentiment, influencers, and other aspects of online behavior.
In order to do this type of analysis, you first need to have some data to analyze. You could scrape websites like Twitter or Facebook with simple web scrapers, but I’ve always found it easier to use the APIs that these companies provide to pull down data.
The Twitter Streaming API is ideal for grabbing data in real time and storing it for analysis. Twitter also has a Search API that lets you pull down a certain number of historical tweets (I think I read it was the last 1,000 tweets…but it’s been a while since I’ve looked at the Search API). I’m a fan of the Streaming API because it lets me grab a much larger set of data than the Search API, but it requires you to build a script that ‘listens’ to the API for your keywords and then stores those tweets somewhere for later analysis.
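For comparison, here’s a minimal sketch of pulling recent tweets through the Search API with Tweepy. It assumes the same API credentials that are set up later in this post; the query and count are just placeholder values.

# A quick Search API sketch (assumes the CONSUMER_* / ACCESS_* credentials
# defined later in this post; '#bigdata' and 100 are placeholder values).
import tweepy

auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True)

# Pull up to 100 recent tweets matching the query. The Search API only
# reaches back over a limited window of recent history.
for tweet in tweepy.Cursor(api.search, q='#bigdata').items(100):
    print(tweet.created_at, tweet.text)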
There are tons of ways to connect to the Streaming API, and quite a few Twitter API wrappers for Python (most of them work very well). I tend to use Tweepy more than others due to its ease of use and simple structure. Additionally, if I’m working on a small or short-term project, I tend to reach for MongoDB to store the tweets, using the PyMongo module. For larger or longer-term projects I usually connect the streaming script to MySQL instead of MongoDB, simply because MySQL fits into my ecosystem of backup scripts better than MongoDB does. MongoDB is perfectly well suited to this type of work for larger projects too…I just tend to swing toward MySQL for those projects (a sketch of the MySQL variant is below).
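For the curious, here’s a rough sketch of what the MySQL storage step might look like. This isn’t part of this post’s script: the ‘tweets’ table, its columns, and the pymysql driver are all assumptions for illustration.

# Hypothetical MySQL storage step (assumes a 'twitterdb' database with a
# 'tweets' table holding 'created_at' and 'tweet_json' columns, and that
# the pymysql driver is installed).
import json
import pymysql

conn = pymysql.connect(host='localhost', user='user', password='password',
                       db='twitterdb', charset='utf8mb4')

def store_tweet(data):
    # 'data' is the raw JSON string the Streaming API hands to on_data.
    datajson = json.loads(data)
    with conn.cursor() as cur:
        cur.execute("INSERT INTO tweets (created_at, tweet_json) VALUES (%s, %s)",
                    (datajson['created_at'], data))
    conn.commit()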
For this post, I wanted to share my script for collecting tweets from the Twitter API and storing them in MongoDB.
Note: This script is a mashup of many other scripts I’ve found on the web over the years. I don’t recall exactly where the pieces came from, but I don’t want to discount the help I’ve had from other people and sites in building it.
Collecting / Storing Tweets with Python and MongoDB
Let’s set up our imports:
from __future__ import print_function
import tweepy
import json
from pymongo import MongoClient
Next, set up your MongoDB connection path:
MONGO_HOST = 'mongodb://localhost/twitterdb'  # assumes you have MongoDB installed locally
                                              # and a database called 'twitterdb'
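If you want to make sure the connection string actually works before you start streaming, a quick sanity check (assuming a local mongod is running) looks like this:

# Optional sanity check: confirm MongoDB is reachable before streaming.
from pymongo import MongoClient

client = MongoClient(MONGO_HOST)
print(client.server_info()['version'])  # raises an exception if mongod isn't reachable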
Next, set up the words that you want to ‘listen’ for on Twitter. You can use words or phrases, separated by commas.
WORDS = ['#bigdata', '#AI', '#datascience', '#machinelearning', '#ml', '#iot']
Here, I’m listening for words related to machine learning, data science, etc.
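One thing worth knowing about the Streaming API’s track parameter: each comma-separated entry is treated as an OR, while a space inside a phrase acts as an AND. For example (a hypothetical list, not the one used in this post):

# Hypothetical example of track matching: this list matches tweets that
# contain both 'data' AND 'science', OR tweets that contain '#bigdata'.
WORDS_EXAMPLE = ['data science', '#bigdata']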
Next, let’s set up our Twitter API access information. You can set these up on Twitter’s application management site.
CONSUMER_KEY = "KEY"
CONSUMER_SECRET = "SECRET"
ACCESS_TOKEN = "TOKEN"
ACCESS_TOKEN_SECRET = "TOKEN_SECRET"
Time to build the listener class.
class StreamListener(tweepy.StreamListener):
    # This is a class provided by Tweepy to access the Twitter Streaming API.

    def on_connect(self):
        # Called once the connection to the Streaming API is established.
        print("You are now connected to the streaming API.")

    def on_error(self, status_code):
        # If an error occurs, display the status code, then disconnect
        # by returning False.
        print('An error has occurred: ' + repr(status_code))
        return False

    def on_data(self, data):
        # This is the meat of the script: it connects to your MongoDB
        # and stores the tweet.
        try:
            client = MongoClient(MONGO_HOST)

            # Use the twitterdb database. If it doesn't exist, it will be created.
            db = client.twitterdb

            # Decode the JSON from Twitter.
            datajson = json.loads(data)

            # Grab the 'created_at' field from the tweet to use for display.
            created_at = datajson['created_at']

            # Print a message to the screen to show we have collected a tweet.
            print("Tweet collected at " + str(created_at))

            # Insert the tweet into a collection called twitter_search.
            # If twitter_search doesn't exist, it will be created.
            db.twitter_search.insert_one(datajson)
        except Exception as e:
            print(e)
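One design note: on_data above opens a new MongoClient for every tweet, which is simple but adds a bit of overhead. If you’re tracking high-volume terms, a variant that opens the connection once (a sketch, not part of the original script) could look like this:

# Variant sketch: open the MongoDB connection once instead of once per tweet.
class SingleClientStreamListener(tweepy.StreamListener):

    def __init__(self, api=None):
        super(SingleClientStreamListener, self).__init__(api=api)
        self.db = MongoClient(MONGO_HOST).twitterdb  # one client for the stream's lifetime

    def on_data(self, data):
        try:
            datajson = json.loads(data)
            print("Tweet collected at " + str(datajson['created_at']))
            self.db.twitter_search.insert_one(datajson)
        except Exception as e:
            print(e)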
Now that we have the listener class, let’s set everything up to start listening.
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

# Set up the listener. 'wait_on_rate_limit=True' helps with Twitter API rate limiting.
listener = StreamListener(api=tweepy.API(wait_on_rate_limit=True))
streamer = tweepy.Stream(auth=auth, listener=listener)
print("Tracking: " + str(WORDS))
streamer.filter(track=WORDS)
Now you are ready to go. The full script is below. You can save it as “streaming_API.py” and run it with “python streaming_API.py”. Assuming you’ve set up MongoDB and your Twitter API keys correctly, you should start collecting tweets.
The Full Script:
from __future__ import print_function
import tweepy
import json
from pymongo import MongoClient

MONGO_HOST = 'mongodb://localhost/twitterdb'  # assumes you have MongoDB installed locally
                                              # and a database called 'twitterdb'

WORDS = ['#bigdata', '#AI', '#datascience', '#machinelearning', '#ml', '#iot']

CONSUMER_KEY = "KEY"
CONSUMER_SECRET = "SECRET"
ACCESS_TOKEN = "TOKEN"
ACCESS_TOKEN_SECRET = "TOKEN_SECRET"


class StreamListener(tweepy.StreamListener):
    # This is a class provided by Tweepy to access the Twitter Streaming API.

    def on_connect(self):
        # Called once the connection to the Streaming API is established.
        print("You are now connected to the streaming API.")

    def on_error(self, status_code):
        # If an error occurs, display the status code, then disconnect
        # by returning False.
        print('An error has occurred: ' + repr(status_code))
        return False

    def on_data(self, data):
        # This is the meat of the script: it connects to your MongoDB
        # and stores the tweet.
        try:
            client = MongoClient(MONGO_HOST)

            # Use the twitterdb database. If it doesn't exist, it will be created.
            db = client.twitterdb

            # Decode the JSON from Twitter.
            datajson = json.loads(data)

            # Grab the 'created_at' field from the tweet to use for display.
            created_at = datajson['created_at']

            # Print a message to the screen to show we have collected a tweet.
            print("Tweet collected at " + str(created_at))

            # Insert the tweet into a collection called twitter_search.
            # If twitter_search doesn't exist, it will be created.
            db.twitter_search.insert_one(datajson)
        except Exception as e:
            print(e)


auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)

# Set up the listener. 'wait_on_rate_limit=True' helps with Twitter API rate limiting.
listener = StreamListener(api=tweepy.API(wait_on_rate_limit=True))
streamer = tweepy.Stream(auth=auth, listener=listener)
print("Tracking: " + str(WORDS))
streamer.filter(track=WORDS)
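Once tweets start flowing in, you can sanity-check what’s been collected from a separate Python session (a quick sketch using the same connection string as above):

# Quick look at what has been collected so far.
from pymongo import MongoClient

db = MongoClient('mongodb://localhost/twitterdb').twitterdb
print(db.twitter_search.count_documents({}))  # number of tweets stored
for tweet in db.twitter_search.find().limit(3):
    print(tweet['created_at'], tweet.get('text'))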