Detecting Offensive Content With Emotion Analysis Using Hume AI
Today, users post enormous volumes of content on the internet, largely through social media, and understanding and moderating that content has become more critical than ever. Detecting offensive content or hate speech is a challenging problem that involves recognizing both subtle and explicit emotional cues.
In this blog post, we’ll explore how to use the Hume AI library to detect offensive content based on emotion scores.
Core Concept
We leverage the Hume AI Expression Measurement API to analyze text for various emotions, focusing on those typically associated with offensive content: anger, disgust, contempt, and annoyance. We then aggregate the scores of these emotions to classify text as offensive or not.
The API provides a rich set of emotional scores for text inputs. Each score corresponds to an emotion (e.g. anger, joy, admiration) and its intensity.
We declare a threshold that determines whether the aggregated score of these emotions crosses the boundary into offensiveness.
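To make the idea concrete, here is a minimal sketch of the aggregation with made-up numbers (the emotion names match Hume's labels, but the scores below are purely illustrative):
scores = {"Anger": 0.05, "Disgust": 0.03, "Contempt": 0.02, "Annoyance": 0.02, "Joy": 0.01}
offensive_emotions = {"Anger", "Disgust", "Contempt", "Annoyance"}
threshold = 0.08

# Sum only the emotions we treat as markers of offensiveness
offensive_score = sum(v for k, v in scores.items() if k in offensive_emotions)
print(offensive_score > threshold)  # 0.12 > 0.08 -> True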
Installing the Hume AI SDK
Before proceeding, install the Hume AI SDK by running the following in your terminal:
pip install hume
This SDK simplifies the process of accessing Hume AI’s powerful emotion analysis APIs.
Importing the necessary libraries
We import the Hume AI client and the supporting modules needed for expression measurement:
import asyncio
from hume import AsyncHumeClient
from hume.expression_measurement.stream import Config
from hume.expression_measurement.stream.socket_client import StreamConnectOptions
from hume.expression_measurement.stream.types import StreamLanguage
Inside the Function
The is_offensive() function determines whether a given text is offensive by analyzing its emotion scores:
def is_offensive(emotion_scores):
    offensive_emotions = {"Anger", "Disgust", "Contempt", "Annoyance"}
    threshold = 0.08  # Configurable threshold
    offensive_score = sum(
        score.score for score in emotion_scores if score.name in offensive_emotions
    )
    return offensive_score > threshold
- offensive_emotions declares the set of emotions strongly associated with offensive content.
- offensive_score aggregates the scores of those emotions and is compared against threshold, which you can adjust to get the desired sensitivity. A quick offline sanity check follows this list.
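To test the function without calling the API, we can mimic the SDK's score objects, which expose .name and .score attributes, with a simple namedtuple (the values below are hypothetical):
from collections import namedtuple

# Hypothetical stand-in for the SDK's emotion score objects
EmotionScore = namedtuple("EmotionScore", ["name", "score"])

mock_scores = [
    EmotionScore("Anger", 0.06),
    EmotionScore("Contempt", 0.055),
    EmotionScore("Joy", 0.20),  # ignored: not in offensive_emotions
]
print(is_offensive(mock_scores))  # 0.06 + 0.055 = 0.115 > 0.08 -> True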
Asynchronous Processing with Hume AI
Using the AsyncHumeClient, we analyze text samples asynchronously. This enables efficient handling of multiple inputs in real-time applications:
async def main():
    samples = [
        "Mary had a little lamb, Its fleece was white as snow. Everywhere the child went, The little lamb was sure to go.",
        "You idiot! you should shut your mouth and shouldn't complain about cleaning up your house."
    ]
    client = AsyncHumeClient(api_key="YOUR_API_KEY")
    model_config = Config(language=StreamLanguage())
    stream_options = StreamConnectOptions(config=model_config)
    async with client.expression_measurement.stream.connect(options=stream_options) as socket:
        for sample in samples:
            result = await socket.send_text(sample)
            emotions = result.language.predictions[0].emotions
            is_offensive_text = is_offensive(emotions)
            print(f"Text: {sample}\nOffensive: {is_offensive_text}\n")

asyncio.run(main())
Replace YOUR_API_KEY with an actual API key from the Hume AI platform. You can obtain one by signing up on Hume AI’s website, navigating to the API section, and creating a new key.
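Rather than hardcoding the key, you may prefer to read it from an environment variable. A minimal sketch, assuming you have exported a variable named HUME_API_KEY (the name is our choice, not mandated by the SDK):
import os

# HUME_API_KEY is an assumed variable name; set it in your shell first,
# e.g. export HUME_API_KEY="..." on macOS/Linux
client = AsyncHumeClient(api_key=os.environ["HUME_API_KEY"])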
Let’s analyze two sample inputs:
1. Input:
Mary had a little lamb, Its fleece was white as snow. Everywhere the child went, The little lamb was sure to go.
Output:
Offensive: False
The cumulative offensive score is 0.079, which falls just below the threshold of 0.08, so the text is classified as not offensive.
2. Input:
You idiot! you should shut your mouth and shouldn't complain about cleaning up your house.
Output:
Offensive: True
The cumulative offensive score is 0.114, which exceeds the threshold of 0.08, so the text is classified as offensive.
Real-Life Applications
This method can be integrated into various systems, including:
- Content Moderation Platforms: Automatically filter or flag harmful content for review (a minimal routing sketch follows this list).
- Social Media Analysis: Gauge community sentiment and prevent toxicity.
- Customer Support Systems: Detect and respond to offensive customer messages.
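As an illustration of the first use case, here is a hypothetical routing helper that turns the classifier's verdict into a moderation decision (the action names are ours, not part of any platform's API):
def moderate(text, emotion_scores):
    """Route a post to a moderation queue or publish it directly."""
    if is_offensive(emotion_scores):
        return ("flag_for_review", text)
    return ("publish", text)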
Conclusion
Emotion-based offensiveness detection is a powerful tool for handling unstructured text. While this example uses Hume AI, the methodology can be adapted to other APIs or custom models. By tailoring the threshold and emotion weights, you can build a robust solution that aligns with your specific application needs.
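For example, one way to tailor emotion weights is to scale each score before aggregating. A minimal sketch; the weights below are arbitrary starting points, not values recommended by Hume AI:
# Per-emotion weights: higher values make an emotion count more heavily
emotion_weights = {"Anger": 1.0, "Disgust": 1.0, "Contempt": 0.8, "Annoyance": 0.5}

def is_offensive_weighted(emotion_scores, threshold=0.08):
    weighted_score = sum(
        emotion_weights.get(score.name, 0.0) * score.score
        for score in emotion_scores
    )
    return weighted_score > threshold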
Next Steps
- Experiment with different thresholds to fine-tune sensitivity.
- Expand the set of offensive emotions based on context.
- Explore Hume AI’s other capabilities, such as emotion recognition in images, audio, and video.
You can try running the script yourself. Download the code from here.
✨ Thanks for reading!
👐 Connect with me on:
LinkedIn: https://www.linkedin.com/in/sharvari2706/
GitHub: https://github.com/sharur7
Email: sharuraut7official@gmail.com