Twitter Robots on a Raspberry Pi

Or how to get very quickly write-restricted by Twitter.

This is a short guide to playing around with the Twitter API using Python on a Raspberry Pi (or any other Linux machine).

Overview

The process has four general steps:

  1. Set up a new Twitter account and create a new Twitter app;
  2. Set up the Raspberry Pi to access Twitter;
  3. Write code to return tweets associated with given search terms; and
  4. Write code to post tweets based on the given search terms.

Step 1 – Create New Twitter Account and App

First it helps to have an email alias to avoid spam in your main email account. I found the site www.33mail.com, which offers multiple email addresses for free in the form [X]@[your-username].33mail.com.

Next register a new Twitter account using the email alias. To do this, simply log out of any active Twitter accounts, then go to www.twitter.com and sign up for a new account. It’s pretty quick and straightforward. Skip all the ‘suggested follows’ rubbish.

I used public domain images from WikiMedia Commons (this British Library collection on Flickr is also great) for the profile and added a relevant bio, location and birthday for my Robot.

Once the new Twitter account is set up, log in. Then go to http://dev.twitter.com and create a new application. Once the application is created you can view the associated consumer key and secret (under the ‘Access Keys’ tab). You can also request an access token and secret on the same page.

Some points to note:

  • You need to register a phone number with your new Twitter account before it will allow you to create a new application. One phone number can be linked with up to 10 Twitter accounts. Beware that SMS notifications etc will be sent to the most recently added account – this was fine by me as I typically avoid being pestered by SMS.
  • You are asked to enter a website for your application. However, this can just be a placeholder. I used “http://www.placeholder.com”.

Step 2 – Set Up the Raspberry Pi

I have a headless Raspberry Pi I access via SSH with my iPad. Any old Linux machine will do though.

To configure the computer do the following:

  • Create a GitHub repository (mine is here).
  • SSH into the computer.
  • Clone the remote repository (e.g. ‘git clone [repository link]’). I find it easier to use SSH for communication with the GitHub servers (see this page for how to set up SSH keys).
  • CD into the newly generated folder (e.g. ‘cd social-media-bot’).
  • Initialise a new virtual environment using virtualenv and virtualenvwrapper. I found this blogpost very helpful for this. (Once you have installed those two tools via ‘pip’, use ‘mkvirtualenv social-media-bot’ to set up, then ‘workon social-media-bot’ to work within the initialised virtual env. For other commands (I haven’t used any yet) see here.)
  • Install Twitter tools and other required libraries. This was as simple as typing ‘pip install twitter’ (within the ‘social-media-bot’ virtualenv).

Step 3 – Write Search Code

As with previous posts I decided to use ConfigParser (configparser in Python 3+) to hide specific secrets from GitHub uploads.

My Python script thus uses a settings.cfg file structured as follows:
----
[twitter_settings]
ACCESS_TOKEN = [Your token here]
ACCESS_SECRET = [Your secret here]
CONSUMER_KEY = [Your key here]
CONSUMER_SECRET = [Your secret here]

[query_settings]
query_term = [Your query term here]
last_tweet_id = 0

[response_settings]
responses =
 'String phrase 1.';
 'String phrase 2.';
 'String phrase 3.'
----

Create this file in the directory with the Python code. 

  • The first section (‘twitter_settings’) stores the Twitter app access keys that you copy and paste from the ‘Access Keys’ tab of the Twitter developer webpage. 
  • The second section (‘query_settings’) stores the query term (e.g. ‘patent’) and a variable that keeps track of the highest tweet ID returned by the last search.
  • The third section (‘response_settings’) contains string phrases that I can randomly select for automated posts.
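Since the ‘responses’ value spans several lines, ConfigParser hands it back as a single string that needs splitting on the ‘;’ separators and stripping of quotes. A minimal sketch of that parsing (shown with Python 3’s configparser and an in-memory copy of the example file):

```python
import configparser  # ConfigParser in Python 2

# In-memory copy of the example settings.cfg sections above
cfg_text = """
[query_settings]
query_term = patent
last_tweet_id = 0

[response_settings]
responses =
 'String phrase 1.';
 'String phrase 2.';
 'String phrase 3.'
"""

parser = configparser.ConfigParser()
parser.read_string(cfg_text)  # Python 2's ConfigParser reads from a file object instead

query_term = parser.get('query_settings', 'query_term')
last_tweet_id = parser.getint('query_settings', 'last_tweet_id')

# The multi-line value comes back as one string; split on ';' and strip quotes
raw = parser.get('response_settings', 'responses')
responses = [r.strip().strip("'") for r in raw.split(';')]

print(query_term, last_tweet_id, responses)
```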

The Python code for accessing Twitter is called twitter_bot.py. Have a look on GitHub – https://github.com/benhoyle/social-media-bot.

The comments should hopefully make the code self-explanatory. Authentication, which is often fairly tricky, is a doddle with the Python ‘twitter’ library – simply create a new ‘oauth’ object using the access keys loaded from settings.cfg and use this to initiate the connection to the Twitter API:

settings_dict = dict(parser.items('twitter_settings'))
oauth = OAuth(settings_dict['access_token'], settings_dict['access_secret'], settings_dict['consumer_key'], settings_dict['consumer_secret'])

# Initiate the connection to Twitter REST API
twitter = Twitter(auth=oauth)

The script then searches for tweets containing the query term. 


tweets = twitter.search.tweets(q=expanded_query_term, lang='en', result_type='recent', count='10', since_id=last_tweet_id)['statuses']

Points to note:

  • There is a lot of ‘noise’ in the form of retweets and replies on Twitter. I wanted to look for original, stand-alone tweets. To filter out retweets add ' -RT' to the query string. To filter out replies use ' -filter:replies' (this isn’t part of the documented API but appeared to work).
  • I found that search terms often meant something else in languages other than English. Using lang='en' limited the search to English language posts.
  • The parameters for the API function map directly onto the API parameters as found here: https://dev.twitter.com/rest/reference/get/search/tweets.
  • The ‘since_id’ parameter searches from a given tweet ID. The code saves the highest tweet ID from each search so that only new results are found.
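Putting those points together, the expanded query term passed to the search call can be built from the configured term like this (a sketch; ‘build_query’ is a hypothetical helper name, not from the repository):

```python
def build_query(query_term):
    """Append filters that exclude retweets and replies from the search."""
    # ' -RT' drops retweets; ' -filter:replies' drops replies
    # (the latter is undocumented but appeared to work)
    return query_term + ' -RT -filter:replies'

expanded_query_term = build_query('patent')
print(expanded_query_term)  # patent -RT -filter:replies
```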

Step 4 – Posting Replies

A reply is then posted from the account associated with the token, key and secrets. Each reply randomly selects one of the string phrases from the responses section and is posted in reply to a tweet containing the query term.


# Extract tweet ID, username and text of tweets returned from search
tweets = [{
    'id_str': tweet['id_str'],
    'screen_name': tweet['user']['screen_name'],
    'original_text': tweet['text'],
    'response_text': '@' + tweet['user']['screen_name'] + ' ' + random.choice(responses)
    } for tweet in tweets if query_term in tweet['text'].lower()]

# Post the replies on Twitter
for tweet in tweets:
    # Leave a random pause (75-120s) between posts to avoid rate limits
    twitter.statuses.update(status=tweet['response_text'], in_reply_to_status_id=tweet['id_str'])
    time.sleep(random.randint(75, 120))

There is finally a little bit of code to extract the maximum tweet ID from the search results and save it in the settings.cfg file.


# Record the highest tweet ID returned by the search so that the next
# search (via since_id) starts from this point and nothing is posted twice
id_ints = [int(t['id_str']) for t in tweets]

# Add highest tweet ID to settings.cfg file
parser.set('query_settings', 'last_tweet_id', str(max(id_ints)))

# Write updated settings.cfg file
with open('settings.cfg', 'wb') as configfile:
    parser.write(configfile)

I have the script scheduled as a cron job that runs every 20 minutes. I had read that the rate limits were around 1 tweet per minute, so you will see above that I leave a random gap of at least a minute between each reply. I had to hard-code the path to the settings.cfg file to get this cron job working – you may need to modify this for your own path.

I also found that it wasn’t entirely clear how to run a cron command within a virtual environment. After a bit of googling I found a neat little trick to get this to work: use the Python executable in the project’s ‘bin’ folder under ‘.virtualenvs’. Hence the command to add to the crontab was:


~/.virtualenvs/social-media-bot/bin/python ~/social-media-bot/twitter_bot.py
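For completeness, the matching crontab entry (added via ‘crontab -e’) for a 20-minute schedule would look something like:

```
*/20 * * * * ~/.virtualenvs/social-media-bot/bin/python ~/social-media-bot/twitter_bot.py
```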

Result

This all worked rather nicely. For about an hour. Then Twitter automatically write-restricted my application. 

A bit of googling took me to this article: https://support.twitter.com/articles/76915. It appears you are only allowed to post replies to users if you are a large multi-national airline. Never mind.

Maybe automated tweet ‘quoting’, favouriting or just posting would work better for next time. Still it was an enjoyable play-around with the dynamics of the Twitter API. It should be easy to incorporate tweets into future projects.
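A ‘quote’-style post, for example, just appends the original tweet’s URL to the status text, which sidesteps the reply mechanism entirely. A small sketch (‘quote_status’ is a hypothetical helper, not part of the repository):

```python
def quote_status(response_text, screen_name, id_str):
    """Build a quote-style status by appending the original tweet's URL."""
    url = 'https://twitter.com/%s/status/%s' % (screen_name, id_str)
    return response_text + ' ' + url

# The result could be posted with twitter.statuses.update(status=...)
print(quote_status('Interesting point.', 'example_user', '123456789'))
```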


First Steps into the Quantified Self: Getting to Know the Fitbit API

Today I have been experimenting with the Fitbit API.

Fitbit Logo

Here are some rather amateurish steps for getting data from Fitbit’s “cloud”.

Simple Client Setup for Public Data

1. Get and setup a Fitbit product.

I have access to a set of Fitbit scales (the “Aria”). The Fitbit Force looks quite good; I may get one of those when they are launched in the UK to record steps.

2. Register an “App”

Go to https://dev.fitbit.com. Log in using your Fitbit account from 1). Click the link to add an “app”.

3. Setup Libraries

Install oauth2:

sudo pip install oauth2

Clone python-fitbit:

mkdir Healthcare
cd Healthcare
git clone https://github.com/orcasgit/python-fitbit.git

4. Record Consumer Key & Secret

After registering your “app” in 2) above, you have access to a “Consumer Key” and a “Consumer Secret”. You can access these details by going to “Manage My Apps”. I then store these in a “config.ini” file I keep in my working project directory. For example:

[Login Parameters]
C_KEY=*key_string*
C_SECRET=*secret_string*

5. Access Public Data

With the client variables we can access public data on Fitbit.

import fitbit
import ConfigParser

#Load Settings
parser = ConfigParser.SafeConfigParser()
parser.read('config.ini')
consumer_key = parser.get('Login Parameters', 'C_KEY')
consumer_secret = parser.get('Login Parameters', 'C_SECRET')

#Setup an unauthorised client (e.g. with no user)
unauth_client = fitbit.Fitbit(consumer_key, consumer_secret)

#Get data for a user
user_params = unauth_client.user_profile_get(user_id='1ABCDE')

You can get your user ID by accessing your Fitbit profile page. It is displayed in a URL just above any photo you may have. The last command returns a JSON string similar to this one:

{u'user': {u'city': u'', u'strideLengthWalking': 0, u'displayName': u'USERNAME', u'weight': 121.5, u'country': u'', u'aboutMe': u'', u'strideLengthRunning': 0, u'height': 0, u'timezone': u'UTC', u'dateOfBirth': u'', u'state': u'', u'encodedId': u'1ABCDE', u'avatar': u'https://pic_url.jpg', u'gender': u'NA', u'offsetFromUTCMillis': 0, u'fullName': u'', u'nickname': u'', u'avatar150': u'https://pic_url.jpg'}}
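The returned structure is just a nested dictionary, so individual fields can be picked out directly. A short sketch using a cut-down version of the sample response above:

```python
# Cut-down version of the (anonymised) profile data returned above
profile = {'user': {'displayName': 'USERNAME', 'weight': 121.5,
                    'timezone': 'UTC', 'encodedId': '1ABCDE', 'height': 0}}

# Index into the nested dict to pull out individual fields
user = profile['user']
print(user['displayName'], user['weight'])  # USERNAME 121.5
```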

As I am only looking at getting a current weight reading (as taken from the last Aria scales measurement) I could stop here, as long as I have made my “body” data public (viewable by “Anyone” on the Fitbit profile > privacy page). However, as I wanted to get a BMI and body fat reading I continued with user authentication.

User Authentication

The python-fitbit library has a routine called gather_keys_cli.py. I followed this routine in IPython to get the user key and secret for my own Fitbit account.

1. Fetch Request Token

client = fitbit.FitbitOauthClient(consumer_key, consumer_secret)
token = client.fetch_request_token()
print 'FROM RESPONSE'
print 'key: %s' % str(token.key)
print 'secret: %s' % str(token.secret)
print 'callback confirmed? %s' % str(token.callback_confirmed)
print ''

2. Allow Access to User Fitbit Account

I first followed the steps:

print '* Authorize the request token in your browser'
print ''
print 'open: %s' % client.authorize_token_url(token)
print ''

I had registered my callback URL as http://localhost/callback. I had not actually implemented a request handler for this URL on my local machine. However, I learnt this was not a problem – the parameters I needed were included in the callback URL. Even though the page got a ‘404’ I could still see and extract the ‘verifier’ parameter from the URL in the browser.
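Rather than reading the verifier out of the address bar by eye, it can be extracted from the callback URL programmatically (a sketch with a made-up URL of the form Fitbit redirects to; shown with Python 3’s urllib.parse, which is urlparse in Python 2):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical callback URL of the kind seen in the browser after authorising
callback = 'http://localhost/callback?oauth_token=abc123&oauth_verifier=xyz789'

# Parse the query string into a dict of parameter lists
params = parse_qs(urlparse(callback).query)
verifier = params['oauth_verifier'][0]
print(verifier)  # xyz789
```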

3. Get User Key and Secret

First I stored the ‘verifier’ parameter from the URL as a string (verifier = “*parameterfromURL*”). Then I ran these steps to get the user key and secret:

print '* Obtain an access token ...'
print ''
print 'REQUEST (via headers)'
print ''
token = client.fetch_access_token(token, verifier)
print 'FROM RESPONSE'
print 'key: %s' % str(token.key)
print 'secret: %s' % str(token.secret)
print ''

4. Save User Key and Secret and Use in Request

The last step is to save the user key and secret in the “config.ini” file. I saved these as “U_KEY” and “U_SECRET” in a similar manner to the consumer key and secret. Hence, these variables could be retrieved by calling:

user_key = parser.get('Login Parameters', 'U_KEY')
user_secret = parser.get('Login Parameters', 'U_SECRET')

Finally, we can use both sets of keys and secrets to access the more detailed user data:

authd_client = fitbit.Fitbit(consumer_key, consumer_secret, user_key=user_key, user_secret=user_secret)

body_stats = authd_client._COLLECTION_RESOURCE('body')
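As a rough sanity check on the returned body stats, BMI can be recomputed from weight and height (a sketch with made-up metric figures – weight in kg, height in cm):

```python
def bmi(weight_kg, height_cm):
    """BMI = weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)

print(round(bmi(77.0, 175.0), 1))  # 25.1
```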

Todo

Some further work includes:

  • Adapting the python-fitbit routines for the iHealth API.
  • Building a script that gets data and imports it into a website datastore.
  • Using the body data to alter an SVG likeness.