# Sentiment Analysis with DistilBERT

This notebook demonstrates how to perform sentiment analysis using Hugging Face's pipeline API with a pre-trained DistilBERT model.

## Importing Pipeline

Import the `pipeline` function from `transformers` to create a ready-to-use sentiment analysis model.

```python
from transformers import pipeline
```
## Creating Sentiment Analysis Pipeline

Create a sentiment analysis pipeline using DistilBERT, a smaller and faster version of BERT that has been fine-tuned on the Stanford Sentiment Treebank (SST-2) dataset.

```python
sentiment_analyzer = pipeline('sentiment-analysis', model='distilbert-base-uncased-finetuned-sst-2-english')
```

Output:

```
Device set to use cuda:0
```
## Testing with Sample Texts

Test the sentiment analyzer with cricket-related texts to see how it classifies positive and negative sentiments.

```python
texts = [
    'I love to play and watch cricket',
    'I hate when virat kohli misses a century'
]
results = sentiment_analyzer(texts)
for res in results:
    print(res)
```

Output:

```
{'label': 'POSITIVE', 'score': 0.9997660517692566}
{'label': 'NEGATIVE', 'score': 0.9990317821502686}
```
## Understanding the Results

The model returns a dictionary for each input:

- `label`: either `'POSITIVE'` or `'NEGATIVE'`
- `score`: confidence level (0-1) for the prediction

Higher scores indicate greater confidence in the classification.
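The label and score can be folded into a single signed polarity value, which is often more convenient downstream. The sketch below is one way to do this; the `0.75` confidence threshold and the `to_polarity` helper are illustrative choices, not part of the pipeline API.

```python
# Post-process pipeline results: convert each {'label', 'score'} dict
# into a signed polarity in [-1, 1], and drop low-confidence predictions.
# The 0.75 threshold is an arbitrary illustrative cutoff.

def to_polarity(result, threshold=0.75):
    """Return a signed score; None if the model's confidence is too low."""
    if result['score'] < threshold:
        return None  # too uncertain to trust
    sign = 1 if result['label'] == 'POSITIVE' else -1
    return sign * result['score']

# The dicts below mirror the outputs shown above.
results = [
    {'label': 'POSITIVE', 'score': 0.9997660517692566},
    {'label': 'NEGATIVE', 'score': 0.9990317821502686},
]
polarities = [to_polarity(r) for r in results]
```

Negative labels come out below zero, so sorting by polarity orders texts from most negative to most positive in one pass.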
## Best Practices
- Test with various text types to understand model behavior
- Consider domain-specific fine-tuning for specialized use cases
- Be aware that models may have biases based on training data
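The first practice above can be made concrete with a small probing harness. This is a sketch: `probe` and `sample_texts` are hypothetical names, and `analyzer` is assumed to be any callable with the pipeline's interface (a list of texts in, a list of `{'label', 'score'}` dicts out), so the DistilBERT pipeline above can be passed in directly.

```python
# A small harness for probing model behavior on varied text types.

def probe(analyzer, texts):
    """Return (text, label, score) triples for easy side-by-side review."""
    return [(t, r['label'], round(r['score'], 4))
            for t, r in zip(texts, analyzer(texts))]

sample_texts = [
    'The match was fine, I guess.',        # hedged / near-neutral phrasing
    'Best innings I have ever seen!',      # strongly positive
    'Oh great, rain stopped play again.',  # sarcasm often trips models
]
```

Running `probe(sentiment_analyzer, sample_texts)` and eyeballing the triples makes it easy to spot where the model's binary POSITIVE/NEGATIVE framing breaks down, e.g. on neutral or sarcastic text.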
## Summary

This notebook demonstrated:

- Using the Hugging Face pipeline for sentiment analysis
- Working with the DistilBERT model
- Analyzing multiple texts at once
- Interpreting confidence scores
The pipeline API makes sentiment analysis accessible with just a few lines of code!