Topic modeling uncovers hidden themes in large document collections. Traditional methods like Latent Dirichlet Allocation rely on word frequency and treat text as bags of words, often missing deeper context and meaning.
BERTopic takes a different route, combining transformer embeddings, clustering, and c-TF-IDF to capture semantic relationships between documents. It produces more meaningful, context-aware topics suited for real-world data. In this article, we break down how BERTopic works and how you can apply it step by step.
What is BERTopic?
BERTopic is a modular topic modeling framework that treats topic discovery as a pipeline of independent but connected steps. It integrates deep learning and classical natural language processing techniques to produce coherent and interpretable topics.
The core idea is to transform documents into semantic embeddings, cluster them based on similarity, and then extract representative words for each cluster. This approach allows BERTopic to capture both meaning and structure within text data.
At a high level, BERTopic follows this process:
- Convert documents into embeddings
- Reduce the embedding dimensions using UMAP
- Cluster the reduced embeddings using HDBSCAN
- Extract representative topic words for each cluster using c-TF-IDF
Each component of this pipeline can be modified or replaced, making BERTopic highly flexible for different applications.
Key Components of the BERTopic Pipeline
1. Preprocessing
The first step involves preparing raw text data. Unlike traditional NLP pipelines, BERTopic does not require heavy preprocessing. Minimal cleaning, such as lowercasing, removing extra spaces, and filtering out very short documents, is usually sufficient.
2. Document Embeddings
Each document is converted into a dense vector using transformer-based models such as SentenceTransformers. This allows the model to capture semantic relationships between documents.
Mathematically:

v_i = Embed(d_i)

where d_i is a document and v_i is its vector representation.
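The key property of these embeddings is that semantically related documents end up close together in vector space. The sketch below illustrates this with tiny made-up 4-dimensional vectors; real models such as SentenceTransformers produce vectors with hundreds of dimensions, but the comparison works the same way.

```python
from math import sqrt

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings (illustrative values only)
v_satellite = [0.9, 0.1, 0.8, 0.0]   # "NASA launched a satellite"
v_space     = [0.8, 0.2, 0.9, 0.1]   # "Space exploration is growing"
v_religion  = [0.1, 0.9, 0.0, 0.8]   # "Philosophy and religion are related"

print(cosine_similarity(v_satellite, v_space))     # high: same theme
print(cosine_similarity(v_satellite, v_religion))  # low: different theme
```

In a real pipeline the vectors would come from a model such as `SentenceTransformer("all-MiniLM-L6-v2").encode(docs)` rather than being written by hand.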
3. Dimensionality Reduction
High-dimensional embeddings are difficult to cluster effectively. BERTopic uses UMAP to reduce the dimensionality while preserving the structure of the data.
This step improves clustering performance and computational efficiency.
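To make the idea of dimensionality reduction concrete without requiring the umap-learn package, the sketch below projects hypothetical 50-dimensional "embeddings" down to 2 dimensions using PCA via NumPy. PCA is only a stand-in here: UMAP is a nonlinear method and typically preserves the local structure of embeddings much better, but the shape transformation is the same.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical high-dimensional "embeddings": 10 documents, 50 dimensions
embeddings = rng.normal(size=(10, 50))

# PCA (via SVD) stands in for UMAP to illustrate projecting
# high-dimensional vectors into a low-dimensional space.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:2].T   # keep the first 2 principal components

print(embeddings.shape)  # (10, 50)
print(reduced.shape)     # (10, 2)
```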
4. Clustering
After dimensionality reduction, clustering is performed using HDBSCAN. This algorithm groups similar documents into clusters and identifies outliers.
z_i = HDBSCAN(UMAP(v_i))

where z_i is the assigned topic label. Documents labeled as −1 are considered outliers.
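The sketch below illustrates density-based clustering on made-up 2-D points. It uses scikit-learn's DBSCAN rather than HDBSCAN (so it runs without the hdbscan package), but the two share the key behaviors described here: clusters emerge from dense regions, and points that fit no cluster are labeled −1.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical 2-D points standing in for reduced document embeddings:
# two tight groups plus one isolated point.
points = np.array([
    [0.0, 0.0], [0.1, 0.1], [0.0, 0.2],   # group A
    [5.0, 5.0], [5.1, 5.1], [5.0, 4.9],   # group B
    [20.0, 20.0],                         # isolated outlier
])

# DBSCAN stands in for HDBSCAN: both are density-based and
# label points that belong to no cluster as -1.
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(points)
print(labels)  # groups A and B get their own labels; the outlier gets -1
```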
5. c-TF-IDF Topic Representation
Once clusters are formed, BERTopic generates topic representations using c-TF-IDF.
Term Frequency (all documents in a cluster are concatenated into a single "class" document):

tf_{t,c} = frequency of term t in class c

Inverse Class Frequency:

icf_t = log(1 + A / f_t)

where A is the average number of words per class and f_t is the frequency of term t across all classes.

Final c-TF-IDF:

W_{t,c} = tf_{t,c} × log(1 + A / f_t)
This method highlights words that are distinctive within a cluster while reducing the importance of common words across clusters.
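The formula above can be sketched in a few lines of plain Python. The clusters and words here are hypothetical, and this is a simplified illustration of the weighting scheme rather than BERTopic's internal implementation, but it shows why a term concentrated in one cluster outranks terms that are merely present.

```python
import math
from collections import Counter

# Two hypothetical clusters; each cluster's documents are concatenated
# into a single "class" of words.
classes = {
    0: "nasa launched a satellite nasa studies space".split(),
    1: "philosophy and religion are related".split(),
}

A = sum(len(words) for words in classes.values()) / len(classes)  # avg words per class
f = Counter(w for words in classes.values() for w in words)       # term freq across all classes

def c_tf_idf(cls):
    counts = Counter(classes[cls])
    total = len(classes[cls])
    # tf within the class, scaled by log(1 + A / f_t)
    return {t: (n / total) * math.log(1 + A / f[t]) for t, n in counts.items()}

scores = c_tf_idf(0)
top = max(scores, key=scores.get)
print(top)  # the most distinctive word for cluster 0
```

Because "nasa" appears twice in cluster 0, its higher term frequency outweighs the slightly larger icf of the words that appear only once, so it surfaces as the cluster's most representative word.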
Hands-On Implementation
This section demonstrates a simple implementation of BERTopic using a very small dataset. The goal here is not to build a production-scale topic model, but to understand how BERTopic works step by step. In this example, we preprocess the text, configure UMAP and HDBSCAN, train the BERTopic model, and inspect the generated topics.
Step 1: Import Libraries and Prepare the Dataset
import re
import umap
import hdbscan
from bertopic import BERTopic
docs = [
    "NASA launched a satellite",
    "Philosophy and religion are related",
    "Space exploration is growing"
]
In this first step, the required libraries are imported. The re module is used for basic text preprocessing, while umap and hdbscan are used for dimensionality reduction and clustering. BERTopic is the main library that combines these components into a topic modeling pipeline.
A small list of sample documents is also created. These documents belong to different themes, such as space and philosophy, which makes them useful for demonstrating how BERTopic attempts to separate text into different topics.
Step 2: Preprocess the Text
def preprocess(text):
    text = text.lower()
    text = re.sub(r"\s+", " ", text)
    return text.strip()
docs = [preprocess(doc) for doc in docs]
This step performs basic text cleaning. Each document is converted to lowercase so that words like “NASA” and “nasa” are treated as the same token. Extra spaces are also removed to standardize the formatting.
Preprocessing is important because it reduces noise in the input. Although BERTopic uses transformer embeddings that are less dependent on heavy text cleaning, simple normalization still improves consistency and makes the input cleaner for downstream processing.
Step 3: Configure UMAP
umap_model = umap.UMAP(
    n_neighbors=2,
    n_components=2,
    min_dist=0.0,
    metric="cosine",
    random_state=42,
    init="random"
)
UMAP is used here to reduce the dimensionality of the document embeddings before clustering. Since embeddings are usually high-dimensional, clustering them directly is often difficult. UMAP helps by projecting them into a lower-dimensional space while preserving their semantic relationships.
The parameter init="random" is especially important in this example because the dataset is extremely small. With only three documents, UMAP's default spectral initialization may fail, so random initialization is used to avoid that error. The settings n_neighbors=2 and n_components=2 are chosen to suit this tiny dataset.
Step 4: Configure HDBSCAN
hdbscan_model = hdbscan.HDBSCAN(
    min_cluster_size=2,
    metric="euclidean",
    cluster_selection_method="eom",
    prediction_data=True
)
HDBSCAN is the clustering algorithm used by BERTopic. Its role is to group similar documents together after dimensionality reduction. Unlike methods such as K-Means, HDBSCAN does not require the number of clusters to be specified in advance.
Here, min_cluster_size=2 means that at least two documents are needed to form a cluster. This is appropriate for such a small example. The prediction_data=True argument allows the model to retain information useful for later inference and probability estimation.
Step 5: Create the BERTopic Model
topic_model = BERTopic(
    umap_model=umap_model,
    hdbscan_model=hdbscan_model,
    calculate_probabilities=True,
    verbose=True
)
In this step, the BERTopic model is created by passing the custom UMAP and HDBSCAN configurations. This shows one of BERTopic’s strengths: it is modular, so individual components can be customized according to the dataset and use case.
The option calculate_probabilities=True enables the model to estimate topic probabilities for each document. The verbose=True option is useful during experimentation because it displays progress and internal processing steps while the model is running.
Step 6: Fit the BERTopic Model
topics, probs = topic_model.fit_transform(docs)
This is the main training step. BERTopic now performs the complete pipeline internally:
- It converts documents into embeddings
- It reduces the embedding dimensions using UMAP
- It clusters the reduced embeddings using HDBSCAN
- It extracts topic words using c-TF-IDF
The result is stored in two outputs:
- topics, which contains the assigned topic label for each document
- probs, which contains the probability distribution or confidence values for the assignments
This is the point where the raw documents are transformed into topic-based structure.
Step 7: View Topic Assignments and Topic Information
print("Topics:", topics)
print(topic_model.get_topic_info())
for topic_id in sorted(set(topics)):
    if topic_id != -1:
        print(f"\nTopic {topic_id}:")
        print(topic_model.get_topic(topic_id))
This final step is used to inspect the model’s output.
- print("Topics:", topics) shows the topic label assigned to each document.
- get_topic_info() displays a summary table of all topics, including topic IDs and the number of documents in each topic.
- get_topic(topic_id) returns the top representative words for a given topic.
The condition if topic_id != -1 excludes outliers. In BERTopic, a topic label of -1 means that the document was not confidently assigned to any cluster. This is a normal behavior in density-based clustering and helps avoid forcing unrelated documents into incorrect topics.
Advantages of BERTopic
Here are the main advantages of using BERTopic:
- Captures semantic meaning using embeddings: BERTopic uses transformer-based embeddings to understand the context of text rather than just word frequency. This allows it to group documents with similar meanings even if they use different words.
- Automatically determines the number of topics: Using HDBSCAN, BERTopic does not require a predefined number of topics. It discovers the natural structure of the data, making it suitable for unknown or evolving datasets.
- Handles noise and outliers effectively: Documents that do not clearly belong to any cluster are labeled as outliers instead of being forced into incorrect topics. This improves the overall quality and clarity of the topics.
- Produces interpretable topic representations: With c-TF-IDF, BERTopic extracts keywords that clearly represent each topic. These words are distinctive and easy to understand, making interpretation straightforward.
- Highly modular and customizable: Each part of the pipeline can be adjusted or replaced, such as embeddings, clustering, or vectorization. This flexibility allows it to adapt to different datasets and use cases.
Conclusion
BERTopic represents a significant advancement in topic modeling by combining semantic embeddings, dimensionality reduction, clustering, and class-based TF-IDF. This hybrid approach allows it to produce meaningful and interpretable topics that align more closely with human understanding.
Rather than relying solely on word frequency, BERTopic leverages the structure of semantic space to identify patterns in text data. Its modular design also makes it adaptable to a wide range of applications, from analyzing customer feedback to organizing research documents.
In practice, the effectiveness of BERTopic depends on careful selection of embeddings, tuning of clustering parameters, and thoughtful evaluation of results. When applied correctly, it provides a powerful and practical solution for modern topic modeling tasks.
Frequently Asked Questions
Q. How is BERTopic different from traditional topic models like LDA?
A. It uses semantic embeddings instead of word frequency, allowing it to capture context and meaning more effectively.

Q. How does BERTopic determine the number of topics?
A. It uses HDBSCAN clustering, which automatically discovers the natural number of topics without a predefined input.

Q. What is the main limitation of BERTopic?
A. It is computationally expensive due to embedding generation, especially for large datasets.
