Commit 0f35193
Update README.md
1 parent 4314fb5 commit 0f35193
File tree

1 file changed: README.md (24 additions & 24 deletions)
@@ -1,8 +1,7 @@
 [![license](https://img.shields.io/badge/license-MIT-success)](https://github.com/pdrm83/Sent2Vec/blob/master/LICENSE.md)
 [![doc](https://img.shields.io/badge/docs-Medium-blue)](https://towardsdatascience.com/how-to-compute-sentence-similarity-using-bert-and-word2vec-ab0663a5d64)
 
-# Sent2Vec
-## How to Compute Sentence Embedding Fast and Flexible
+# Sent2Vec - How to Compute Sentence Embedding Fast and Flexible
 
 In the past, we mostly encode text data using, for example, one-hot, term frequency, or TF-IDF (normalized term
 frequency). There are many challenges to these techniques. In recent years, the latest advancements give us the
@@ -14,7 +13,7 @@ flexible sentence embedding library is needed to prototype fast and contextualiz
 package gives you the opportunity to do so. You currently have access to the standard encoders. More advanced
 techniques will be added in the later releases. Hope you can use this library in your exciting NLP projects.
 
-## Install
+## 🔓 Install
 The `sent2vec` is developed to help you prototype faster. That is why it has many dependencies on other libraries. The
 module requires the following libraries:
 
@@ -29,24 +28,27 @@ Then, it can be installed using pip:
 pip3 install sent2vec
 ```
 
-## Documentation
-
-*class* **sent2vec.vectorizer.Vectorizer**(pretrained_weights='distilbert-base-uncased', ensemble_method='average')
+## 📚 Documentation
+```python
+class sent2vec.vectorizer.Vectorizer(pretrained_weights='distilbert-base-uncased', ensemble_method='average')
+```
 
 ### **Parameters**
 
-- **pretrained_weights**: str, *default*='distilbert-base-uncased' - If the string does not include an extension .txt, .gz or .bin, then Bert vectorizer is loaded using the specified weights. *Example: pass 'distilbert-base-multilingual-cased' to load Bert base multilingual model.* <br/> To load word2vec vectorizer pass a valid path to the weights file (.txt, .gz or .bin). *Example: pass 'glove-wiki-gigaword-300.gz' to load the Wiki vectors (when saved in the same folder you are running the code).*
-- **ensemble_method**: str, *default*='average' - How word vectors are computed into sentece vectors.
+- `pretrained_weights`: str, *default*='distilbert-base-uncased' - Which pretrained model computes the word embeddings. You can pass other BERT models to this parameter, such as the base multilingual model, i.e., `distilbert-base-multilingual-cased`. The vectorizer uses BERT with the specified weights unless you pass a file path ending in `.txt`, `.gz`, or `.bin`, in which case the Gensim library loads the provided word2vec model (pretrained weights). For example, you can pass `glove-wiki-gigaword-300.gz` to load the Wiki vectors (when saved in the same folder you are running the code).
+- `ensemble_method`: str, *default*='average' - How word vectors are aggregated into sentence vectors.
 
 ### **Methods**
-
+```python
 run(sentences, remove_stop_words=['not'], add_stop_words=[])
-- **sentences**: list, - List of sentences.
-- **remove_stop_words**: list, *default*=['not'] - When using sent2vec, list of words to remove from *stop words* when splitting sentences.
-- **add_stop_words**: list, *default*=[] - When using sent2vec, list of words to add to *stop words* when splitting sentences.
+```
+- `sentences`: list - List of sentences.
+- `remove_stop_words`: list, *default*=['not'] - When using sent2vec, the list of words to remove from the *stop words* when splitting sentences.
+- `add_stop_words`: list, *default*=[] - When using sent2vec, the list of words to add to the *stop words* when splitting sentences.
 
-## Usage
-If you want to use the `BERT` language model (more specifically, `distilbert-base-uncased`) to encode sentences for
+## 🧰 Usage
+### 1. How to use the BERT model?
+If you want to use the BERT language model (more specifically, `distilbert-base-uncased`) to encode sentences for
 downstream applications, you must use the code below.
 ```python
 from sent2vec.vectorizer import Vectorizer
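As a side note on `ensemble_method='average'` above: the averaging it describes can be sketched in a few lines of NumPy. This is a toy illustration with made-up word vectors, not the library's actual code:

```python
import numpy as np

# Hypothetical word vectors for one sentence (real ones would come from
# BERT or word2vec, one row per word).
word_vectors = np.array([
    [0.2, 0.4, 0.6],
    [0.4, 0.6, 0.8],
])

# ensemble_method='average': the sentence embedding is the element-wise
# mean of the word vectors.
sentence_vector = word_vectors.mean(axis=0)
print(sentence_vector)  # [0.3 0.5 0.7]
```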
@@ -60,13 +62,6 @@ vectorizer = Vectorizer()
 vectorizer.run(sentences)
 vectors = vectorizer.vectors
 ```
-Default Vectorizer weights are `distilbert-base-uncased` but it's possible to pass the argument `pretrained_weights` to chose another `BERT` model.
-
-For example, to load `BERT base multilingual model`:
-
-```python
-vectorizer = Vectorizer(pretrained_weights='distilbert-base-multilingual-cased')
-```
 
 Now it's possible to compute distance among sentences by using their vectors. In the example, as expected, the distance between
 `vectors[0]` and `vectors[1]` is less than the distance between `vectors[0]` and `vectors[2]`.
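The distance comparison the README describes here can be reproduced with plain NumPy cosine distance. A minimal sketch, using toy vectors in place of real sentence embeddings:

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity, the usual metric for comparing embeddings.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy stand-ins for vectorizer.vectors; real values come from the Vectorizer.
vectors = [
    np.array([1.0, 0.9, 0.1]),
    np.array([0.9, 1.0, 0.2]),
    np.array([0.1, 0.2, 1.0]),
]

dist_1 = cosine_distance(vectors[0], vectors[1])
dist_2 = cosine_distance(vectors[0], vectors[2])
assert dist_1 < dist_2  # vectors[0] is closer to vectors[1] than to vectors[2]
```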
@@ -80,10 +75,15 @@ print('dist_1: {0}, dist_2: {1}'.format(dist_1, dist_2))
 assert dist_1 < dist_2
 # dist_1: 0.043, dist_2: 0.192
 ```
+Note: The default weights for the BERT vectorizer are `distilbert-base-uncased`, but it's possible to pass the argument `pretrained_weights` to choose another BERT model. For example, you can use the code below to load the base multilingual model.
 
-If you want to use a word2vec approach instead, you must pass a valid path to the model weights. Under the hood the sentences will be splitted into lists of words using the `sent2words` method from the `Splitter` class. It is possible to customize the list of stop-words by adding or removing to/from the default list. Two additional arguments (both lists) must be passed when the vectorizer's method .run is called: `remove_stop_words` and `add_stop_words`.
+```python
+vectorizer = Vectorizer(pretrained_weights='distilbert-base-multilingual-cased')
+```
+### 2. How to use the Word2Vec model?
+If you want to use a Word2Vec approach instead, you must pass a valid path to the model weights. Under the hood, the sentences will be split into lists of words using the `sent2words` method from the `Splitter` class. It is possible to customize the list of stop words by adding to or removing from the default list. Two additional arguments (both lists) must be passed when the vectorizer's `run` method is called: `remove_stop_words` and `add_stop_words`.
 
-NOTE: When you extract the most important words in sentences, by default `Vectorizer` computes the sentence embeddings using the average of vectors corresponding to the remaining words.
+NOTE: After the stop words are removed, the sentence embeddings are computed by default as the average of the vectors corresponding to the remaining words.
 
 ```python
 from sent2vec.vectorizer import Vectorizer
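One way to picture the `remove_stop_words` / `add_stop_words` customization described above (a rough guess at the semantics for illustration; the library's own `Splitter` may differ):

```python
# Hypothetical sketch of remove_stop_words / add_stop_words semantics.
default_stop_words = {'a', 'is', 'not', 'the'}

def customize_stop_words(stop_words, remove_stop_words=('not',), add_stop_words=()):
    """Drop remove_stop_words from the set, then add add_stop_words."""
    return (set(stop_words) - set(remove_stop_words)) | set(add_stop_words)

# With the defaults, 'not' is no longer a stop word, so negation
# survives into the sentence embedding.
print(sorted(customize_stop_words(default_stop_words)))  # ['a', 'is', 'the']
```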
@@ -98,4 +98,4 @@ vectorizer.run(sentences, remove_stop_words=['not'], add_stop_words=[])
 vectors = vectorizer.vectors
 ```
 
-And, that's pretty much it!
+And that's pretty much it!
