Twitter Bot for Global Gas & LNG Market News

A Comprehensive Guide and Code Sample for Retweeting Summarized News Content

Highlights

  • Comprehensive Integration: Combines tweet searching, content extraction, and text summarization.
  • Engagement Optimization: Filters tweets by engagement metrics and automates safe retweeting.
  • Scalable Design: Designed for periodic execution with robust error handling and rate-limit management.

Overview

This guide provides a complete solution for building a Twitter bot focused on news about the global gas and LNG markets. The bot searches for tweets mentioning global gas, LNG markets, and related keywords, then processes these tweets to generate succinct summaries of the top content before posting those summaries with attribution. The approach uses Python's Tweepy library to interact with the Twitter API and applies text summarization techniques to condense content. The guide also covers environment setup, a walkthrough of the code, and how to run the bot periodically.

System Architecture & Workflow

Step-by-Step Functionality

The Twitter bot’s operation can be summarized in the following steps:

  1. Authentication: The bot uses the Twitter API via Tweepy to authenticate using developer credentials.
  2. Searching for Tweets: It queries Twitter for recent tweets containing relevant keywords such as "global gas market" and "LNG market", while applying filters like language and content type (e.g., excluding retweets and replies).
  3. Content Extraction and Summarization: For tweets that include a link to further reading, the bot retrieves the linked article content and processes it through a summarization function. If a tweet does not have associated content, it summarizes the content of the tweet itself.
  4. Engagement Filtering: Tweets are filtered based on minimum engagement metrics, such as retweets and likes, ensuring the bot only retweets quality content.
  5. Retweeting Summaries: Finally, the bot posts a new tweet containing the summary, along with proper attribution to the original tweet or the news source.

Detailed Components

Authentication & Setup

The authentication step involves reading Twitter developer credentials from environment variables (or a configuration file) and initializing a Tweepy API client. This ensures secure and controlled access to Twitter’s endpoints.
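
The sketch below shows one way to perform this step, assuming the third-party python-dotenv package (pip install python-dotenv) to read a local .env file; note that os.getenv alone does not read .env files. The variable names match those used in the sample script later in this guide.

# Hedged sketch: load Twitter credentials from a local .env file
import os
from dotenv import load_dotenv
import tweepy

load_dotenv()  # reads KEY=value pairs from .env into the process environment

auth = tweepy.OAuthHandler(os.getenv('CONSUMER_KEY'), os.getenv('CONSUMER_SECRET'))
auth.set_access_token(os.getenv('ACCESS_TOKEN'), os.getenv('ACCESS_TOKEN_SECRET'))
api = tweepy.API(auth, wait_on_rate_limit=True)
api.verify_credentials()  # raises a Tweepy error if the credentials are invalid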

Tweet Searching & Filtering

The bot employs several search parameters to ensure only relevant tweets are fetched; a query-construction sketch follows the list:
– Keywords: a list of terms such as "global gas market", "LNG market", and similar phrases.
– Language: filtering by language ensures only English tweets are processed.
– Filters: excluding retweets and replies avoids redundancy.
– Engagement metrics: minimum thresholds for retweets and likes surface higher-quality content.
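
As a concrete illustration, the snippet below assembles these parameters into a single query string using Twitter's standard search operators. The third keyword phrase is an illustrative addition, and operator support varies by API tier.

# Build a standard-search query from keywords, language, and content filters
keywords = ['"global gas market"', '"LNG market"', '"natural gas prices"']
query = " OR ".join(keywords)                  # match any of the phrases
query += " -filter:retweets -filter:replies"   # drop retweets and replies
query += " lang:en"                            # English tweets only
print(query)
# '"global gas market" OR "LNG market" OR "natural gas prices" -filter:retweets -filter:replies lang:en'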

Content Summarization

This critical function uses basic Natural Language Processing techniques—such as tokenization, frequency analysis, and sentence ranking—to generate concise summaries of longer articles linked in tweets. For improved results, more advanced methods (such as transformers from Hugging Face) can be integrated. The summary is formatted for brevity and adherence to the Twitter character limit.
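
As a sketch of the more advanced route, the snippet below uses the Hugging Face transformers summarization pipeline. It assumes the transformers and torch packages are installed, and the model name is a commonly used default for the task rather than a project requirement.

# Hedged sketch: abstractive summarization with Hugging Face transformers
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_with_transformer(text, max_length=60, min_length=20):
    # The pipeline returns a list of dicts with a 'summary_text' key;
    # truncation=True clips input that exceeds the model's context window
    result = summarizer(text, max_length=max_length, min_length=min_length,
                        do_sample=False, truncation=True)
    return result[0]['summary_text']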

Retweeting and Error Handling

The bot posts each summary as a new tweet through the Twitter API, crediting the original author. Robust error handling is integrated to catch API-related errors (e.g., rate limits, connectivity issues), log them, and wait when necessary before continuing execution. This ensures the bot can run continuously in production or a scheduled environment without significant disruption.
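
The snippet below sketches one retry pattern around a posting call. tweepy.TooManyRequests is the exception Tweepy 4.x raises for HTTP 429 responses; the retry count and delays are illustrative choices, and logger refers to the module-level logger from the sample script.

# Hedged sketch: retry with exponential backoff around a Tweepy call
import time
import tweepy

def post_with_retry(api, text, retries=3, base_delay=60):
    for attempt in range(retries):
        try:
            return api.update_status(text)
        except tweepy.TooManyRequests:
            wait = base_delay * (2 ** attempt)   # 60s, 120s, 240s
            logger.warning("Rate limited; sleeping %s seconds", wait)
            time.sleep(wait)
        except tweepy.TweepyException as e:
            logger.error("Twitter API error: %s", e)
            break
    return None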

Sample Code Implementation

The following Python script provides a complete example of the bot. The code integrates authentication, tweet searching, article content extraction, rudimentary summarization, and posting of summary tweets. Make sure to install the required dependencies first, for example:

pip install tweepy requests beautifulsoup4 nltk


# Import necessary libraries
import tweepy
import time
import os
import re
import requests
from bs4 import BeautifulSoup
from datetime import datetime, timedelta
import logging

# For advanced summarization, install the transformers library and uncomment:
# from transformers import pipeline

# Setup logging for debugging and tracing
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger()

# Load environment variables (ensure you have a .env file or set these variables in your environment)
CONSUMER_KEY = os.getenv('CONSUMER_KEY')  # Your Twitter API consumer key
CONSUMER_SECRET = os.getenv('CONSUMER_SECRET')  # Your Twitter API consumer secret
ACCESS_TOKEN = os.getenv('ACCESS_TOKEN')  # Your Twitter API access token
ACCESS_TOKEN_SECRET = os.getenv('ACCESS_TOKEN_SECRET')  # Your Twitter API access token secret

# Authenticate with the Twitter API using Tweepy
def authenticate_twitter():
    try:
        auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
        auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
        api = tweepy.API(auth, wait_on_rate_limit=True)
        api.verify_credentials()
        logger.info("Twitter authentication successful.")
        return api
    except Exception as e:
        logger.error("Error during Twitter authentication: %s", e)
        return None

# Function to extract article content from a URL
def get_article_content(url):
    try:
        # Time out after 10 seconds so a slow site cannot stall the bot
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        # Parse HTML content using BeautifulSoup
        soup = BeautifulSoup(response.text, 'html.parser')
        paragraphs = soup.find_all('p')
        content = ' '.join([para.get_text() for para in paragraphs])
        return content.strip()
    except Exception as e:
        logger.error("Error fetching content from %s: %s", url, e)
        return ""

# Basic text summarization function using sentence ranking
def summarize_text(text, word_limit=50):
    import nltk
    from nltk.tokenize import sent_tokenize, word_tokenize
    from nltk.corpus import stopwords
    nltk.download('punkt', quiet=True)
    nltk.download('stopwords', quiet=True)

    stop_words = set(stopwords.words('english'))
    sentences = sent_tokenize(text)
    
    # Create a word frequency table
    freq_table = {}
    words = word_tokenize(text.lower())
    for word in words:
        if word.isalpha() and word not in stop_words:
            freq_table[word] = freq_table.get(word, 0) + 1

    # Score each sentence based on word frequencies
    sentence_scores = {}
    for sentence in sentences:
        for word in word_tokenize(sentence.lower()):
            if word in freq_table:
                sentence_scores[sentence] = sentence_scores.get(sentence, 0) + freq_table[word]

    # Sort sentences based on score
    ranked_sentences = sorted(sentence_scores, key=sentence_scores.get, reverse=True)
    summary = ' '.join(ranked_sentences[:3])
    # Respect the word limit
    words_summary = summary.split()
    if len(words_summary) > word_limit:
        summary = ' '.join(words_summary[:word_limit]) + "..."
    return summary

# Create a function for processing individual tweets
def process_tweet(api, tweet):
    try:
        # Extract URLs from the tweet text if available
        pattern = r'https?://[^\s]+'
        urls = re.findall(pattern, tweet.full_text)
        if urls:
            article_content = get_article_content(urls[0])
            if article_content:
                summary = summarize_text(article_content, word_limit=50)
                summary_text = f"Summary ({urls[0]}): {summary}"
            else:
                summary_text = f"Could not extract article content from {urls[0]}"
        else:
            # If no URL, summarize the tweet content itself
            summary = summarize_text(tweet.full_text, word_limit=40)
            summary_text = f"Tweet Summary: {summary}"
        # Credit the original author and respect the 280-character limit
        attribution = f" (via @{tweet.user.screen_name})"
        if len(summary_text) + len(attribution) > 280:
            summary_text = summary_text[:280 - len(attribution) - 3] + "..."
        summary_text += attribution
        # Post the summary as a new tweet
        api.update_status(summary_text)
        logger.info("Posted summary for tweet ID: %s", tweet.id)
    except Exception as e:
        logger.error("Error processing tweet %s: %s", tweet.id, e)

# Function to search and process tweets relevant to global gas and LNG markets
def search_and_process(api):
    query = '"global gas market" OR "LNG market" -filter:retweets -filter:replies'
    # Only look for tweets in English from the last 3 days
    since_date = (datetime.now() - timedelta(days=3)).strftime("%Y-%m-%d")
    full_query = f"{query} lang:en since:{since_date}"
    try:
        tweets = api.search_tweets(q=full_query, tweet_mode='extended', count=20)
        logger.info("Found %s tweets matching the query.", len(tweets))
        for tweet in tweets:
            # Optionally check for engagement; skip if below threshold here
            if tweet.retweet_count >= 5 and tweet.favorite_count >= 10:
                process_tweet(api, tweet)
                # Sleep to obey rate limits and avoid triggering Twitter spam filters
                time.sleep(60)
    except Exception as e:
        logger.error("Error during tweet search: %s", e)

# Main function execution
def main():
    api = authenticate_twitter()
    if api is not None:
        search_and_process(api)
    else:
        logger.error("Exiting due to authentication failure.")

if __name__ == "__main__":
    main()
  

Additional Enhancements and Considerations

You can further improve this bot in many ways. Below is a table summarizing potential enhancements:

| Enhancement | Description | Benefit |
| --- | --- | --- |
| Advanced Summarization | Integrate transformer-based models via Hugging Face. | Produces more coherent and informative summaries. |
| Database Logging | Store processed tweet IDs persistently, e.g., in a SQLite database. | Efficient tracking that avoids duplicate processing across runs. |
| Error Handling | Implement retries and backoff for API rate limits. | Increased reliability and continuous operation. |
| Scheduled Execution | Use schedulers like cron or cloud functions to run the bot periodically. | Timely, regular updates without manual intervention. |
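
To make the database logging row concrete, here is a minimal sketch using Python's built-in sqlite3 module; the database filename and schema are illustrative choices.

# Hedged sketch: track processed tweet IDs in SQLite to avoid duplicates
import sqlite3

conn = sqlite3.connect("processed_tweets.db")
conn.execute("CREATE TABLE IF NOT EXISTS processed (tweet_id TEXT PRIMARY KEY)")

def already_processed(tweet_id):
    # Returns True if this tweet ID was handled in a previous run
    row = conn.execute("SELECT 1 FROM processed WHERE tweet_id = ?",
                       (str(tweet_id),)).fetchone()
    return row is not None

def mark_processed(tweet_id):
    conn.execute("INSERT OR IGNORE INTO processed VALUES (?)", (str(tweet_id),))
    conn.commit()

Inside search_and_process(), checking already_processed(tweet.id) before calling process_tweet(), and calling mark_processed(tweet.id) afterwards, would wire this in.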

Automation & Deployment

Scheduling with Cron or Cloud Functions

To enable continuous operation, consider scheduling the bot to run at regular intervals. For example, if you host the code on a Linux server, you can add a cron job entry:


# Example cron job that runs the script every hour
0 * * * * /usr/bin/python3 /path/to/your/twitter_bot.py
  

Alternatively, you could deploy the bot using cloud-based solutions like Google Cloud Functions or AWS Lambda alongside event schedulers such as Cloud Scheduler to trigger execution.
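
As a sketch, a first-generation Google Cloud Function triggered by Cloud Scheduler via a Pub/Sub topic can simply wrap main(); the handler signature follows the background-function convention, and the function, topic, and runtime names below are placeholders.

# Hedged sketch: Pub/Sub-triggered entry point for a Google Cloud Function
# (1st gen); Cloud Scheduler publishes to the topic on a cron schedule
def run_bot(event, context):
    # event carries the Pub/Sub payload; context carries event metadata
    main()  # the main() function from the bot script above

# Example deployment command (names are placeholders):
# gcloud functions deploy run_bot --runtime python311 --trigger-topic bot-schedule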

Advanced API Integration

As Twitter continues to update its API endpoints, always ensure your code is compatible with the latest documentation and adheres to the best practices published by Twitter. This will help prevent disruptions due to API updates or changes in rate limits.
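
For example, Tweepy 4.x exposes the Twitter API v2 search endpoint through tweepy.Client. The sketch below is a v2 counterpart of the search in the main script; it assumes a bearer token stored in a BEARER_TOKEN environment variable, which suffices for read-only search access.

# Hedged sketch: recent-tweet search via the v2 API using tweepy.Client
import os
import tweepy

client = tweepy.Client(bearer_token=os.getenv("BEARER_TOKEN"))

query = '("global gas market" OR "LNG market") -is:retweet -is:reply lang:en'
response = client.search_recent_tweets(query=query, max_results=20,
                                       tweet_fields=["public_metrics"])

for tweet in response.data or []:
    metrics = tweet.public_metrics  # includes retweet_count and like_count
    if metrics["retweet_count"] >= 5 and metrics["like_count"] >= 10:
        print(tweet.id, tweet.text)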

Conclusion

In this guide, we presented a detailed solution for building a Twitter bot that focuses on the global gas and LNG market using Python. The solution integrates critical components such as tweet search, content extraction, text summarization, and safe retweeting, all while handling possible errors and conforming to Twitter API requirements. This approach not only streamlines the process of keeping up with global market news but also leverages automation to ensure that relevant content is summarized and disseminated efficiently. With the provided code and potential enhancements, you can further tailor this bot to suit your specific needs and operational environment.
