
Sentence similarity Python

Sentence Similarity in Python using Doc2Vec. Introduction. Numeric representation of text documents is a challenging task in machine learning, and there are different ways to create numerical features for texts, such as vector representations using Bag of Words, TF-IDF, etc. I am not going into detail about the advantages of one over the other. The key idea is that word embeddings can represent that two sentences have similar meaning even though they use different words. This can be done quickly with a package such as spaCy in Python; see https://spacy.io/usage/vectors-similarity. Python | Measure similarity between two sentences using cosine similarity. Cosine similarity is a measure of similarity between two non-zero vectors of an inner product space that measures the cosine of the angle between them: Similarity = (A·B) / (||A||·||B||), where A and B are vectors. One can use this Python 3 library to compute sentence similarity: https://github.com/UKPLab/sentence-transformers. A code example is at https://www.sbert.net/docs/usage/semantic_textual_similarity.html
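Along those lines, here is a minimal sketch of scoring sentence pairs with sentence-transformers; the model name 'stsb-roberta-large' is one of several pre-trained options (it also appears further below), and the sentence pairs are the ones discussed in the sample output later in this page, so the exact code on sbert.net may differ:

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer('stsb-roberta-large')
    sentences1 = ['I like Python because I can build AI applications',
                  'The cat sits on the ground']
    sentences2 = ['I like Python because I can do data analytics',
                  'The cat walks on the sidewalk']
    embeddings1 = model.encode(sentences1, convert_to_tensor=True)
    embeddings2 = model.encode(sentences2, convert_to_tensor=True)
    # cosine score for each aligned pair of sentences
    cosine_scores = util.pytorch_cos_sim(embeddings1, embeddings2)
    for i in range(len(sentences1)):
        print(sentences1[i], '<->', sentences2[i], cosine_scores[i][i].item())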

Question or problem about Python programming: According to the Gensim Word2Vec documentation, I can use the word2vec model in the gensim package to calculate the similarity between two words, e.g. trained_model.similarity('woman', 'man') = 0.73723527. However, the word2vec model fails to predict sentence similarity. I found the LSI model with sentence similarity in gensim, but it doesn't seem like it can be combined with the word2vec model. The corpus of each sentence I have is not very long. Therefore, the cosine similarity of the two sentences is 0.684, which is different from the Jaccard similarity of the exact same two sentences, which was 0.5 (calculated above). The code for pairwise cosine similarity of strings in Python starts with: from collections import Counter; from sklearn.feature_extraction.text import CountVectorizer. Measuring Similarity Between Texts in Python. This post demonstrates how to obtain an n by n matrix of pairwise semantic/cosine similarity among n text documents. Finding cosine similarity is a basic technique in text mining. My purpose in doing this is to operationalize common ground between actors in online political discussion. That's where the latter comes in. It's the exact opposite: useless for typo detection, but great for whole-sentence or document similarity calculation. After reading this article you'll be able to do the following: determine which similarity algorithm is suitable for your situation; implement it in Python. Okay, ready to dive in? Let's go
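Filling out that truncated fragment, a pairwise cosine similarity between strings might look like the following sketch (the two example sentences are illustrative, not the pair that produced the 0.684 score above):

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    sentences = ['The bottle is empty', 'There is nothing in the bottle']
    vectors = CountVectorizer().fit_transform(sentences)
    # n x n matrix of pairwise cosine similarities between the count vectors
    print(cosine_similarity(vectors))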

Sample output. You can see that these two sentence pairs (I like Python because I can build AI applications / I like Python because I can do data analytics) and (The cat sits on the ground / The cat walks on the sidewalk) are relatively similar; therefore, the outputted similarity score is also relatively higher. # Python/NLTK implementation of an algorithm to detect similarity between short sentences, described in the paper Sentence Similarity based on Semantic Nets and Corpus Statistics by Li, et al. Results achieved are NOT identical to those reported in the paper. As you know, there is no silver bullet that can calculate perfect similarity between sentences; you should conduct various experiments with your dataset. Caution: the TS-SS score might not fit the sentence similarity task, since this method was originally devised to calculate the similarity between long documents.

Sentence Similarity in Python using Doc2Vec - kanoki

Video: machine learning - Sentence meaning similarity in python

In text analysis applications, a common pipeline is to use semantic similarity from the concept level up to the word and sentence levels. For example, word similarity is first computed based on similarity scores of WordNet concepts, and sentence similarity is computed by composing word similarity scores. Finally, document similarity can be computed by identifying important sentences. We will use cosine similarity between pairs of vectors that represent sentences. The measure considers the angle between two vectors and is commonly used in text analysis. Some good explanations of the chosen similarity measure can be found here; the paper not only provides a clear definition, it also discusses context-based uses

Python | Word Similarity using spaCy. Word similarity is a number between 0 and 1 which tells us how close two words are, semantically. This is done by finding the similarity between word vectors in the vector space. spaCy, one of the fastest NLP libraries widely used today, provides a simple method for this task, and supports two methods to find word similarity. There are also open-source projects in this space: a Python project for generating search results based on a given text, using the TF Universal Sentence Encoder and an ElasticSearch database, and semantic sentence similarity using Word2Vec or fastText embeddings with cosine similarity or Word Mover's Distance.
    # sentence_transformers: Python module using Sentence-BERT
    # check the documentation in the source link for further details
    from sentence_transformers import SentenceTransformer
    from sentence_transformers.util import pytorch_cos_sim  # cosine similarity

    # stsb-roberta-large is one of the possible pre-trained models
    model = SentenceTransformer('stsb-roberta-large')
    # list of sentences (type: str); the original snippet was truncated here,
    # so the sentences and the final comparison are illustrative completions
    sentences = ['The cat sits on the ground', 'The cat walks on the sidewalk']
    embeddings = model.encode(sentences, convert_to_tensor=True)
    print(pytorch_cos_sim(embeddings[0], embeddings[1]))

Python | Measure similarity between two sentences using cosine similarity

  1. Having the texts as vectors and calculating the angle between them, it's possible to measure how close those vectors are and, hence, how similar the texts are. An angle of zero means the texts are exactly equal. As you remember from your high school classes, the cosine of zero is 1. The cosine of the angle between two vectors therefore gives a similarity measure.
  2. Calculate the cosine similarity between the vector embeddings (code below). All the cosine similarity values I get are between 0 and 1. Why is that? Shouldn't I also have negative cosine similarity values? The sentence embeddings have both positive and negative elements.
     import numpy as np  # needed by the snippet

     num_docs = np.array(sentence_embedding).shape[0]
     cos_sim = np.zeros([num_docs, num_docs])
     for ii in range(num_docs):
         for jj in range(num_docs):
             # the original snippet is truncated here; presumably the loop
             # fills cos_sim[ii, jj] with the cosine of embeddings ii and jj
             cos_sim[ii, jj] = (np.dot(sentence_embedding[ii], sentence_embedding[jj])
                                / (np.linalg.norm(sentence_embedding[ii])
                                   * np.linalg.norm(sentence_embedding[jj])))
  3. Calculate cosine similarity given 2 sentence strings. A simple pure-Python implementation would be:
     import re, math
     from collections import Counter

     WORD = re.compile(r'\w+')

     def get_cosine(vec1, vec2):
         intersection = set(vec1.keys()) & set(vec2.keys())
         numerator = sum([vec1[x] * vec2[x] for x in intersection])
         sum1 = sum([vec1[x]**2 for x in vec1.keys()])
         sum2 = sum([vec2[x]**2 for x in vec2.keys()])
         # the original snippet is truncated here; the usual ending is:
         denominator = math.sqrt(sum1) * math.sqrt(sum2)
         return numerator / denominator if denominator else 0.0
  4. Three sentences will be sent through the BERT model to retrieve their embeddings and use them to calculate the sentence similarity by: using the [CLS] token of each sentence; calculating the mean of the sentence embeddings; calculating the max-over-time of the sentence embeddings. The Python libraries that we will use in this notebook are: transformers.
  5. TensorFlow | NLP | Sentence similarity using the TensorFlow cosine function. This code snippet uses TensorFlow 2.0; some of the code might not be compatible with earlier versions, so make sure to update to TF 2.0 before executing it. The tf.keras.losses.cosine_similarity function in TensorFlow computes the cosine similarity between labels and predictions (a minimal example follows this list).
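Here is a small sketch of item 5. Note that tf.keras.losses.cosine_similarity is a loss, so it returns the negated cosine similarity (identical directions give -1.0); this also shows that raw cosine similarity can certainly be negative, which touches on the question in item 2:

    import tensorflow as tf

    a = tf.constant([[1.0, 2.0, 3.0]])
    b = tf.constant([[2.0, 4.0, 6.0]])
    # returns NEGATIVE cosine similarity: -1.0 for perfectly aligned vectors
    print(tf.keras.losses.cosine_similarity(a, b, axis=1).numpy())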

python - Sentence similarity prediction - Data Science Stack Exchange

  1. In this article, I will be covering the top 4 sentence embedding techniques with Python code. Further, I limit the scope of this article to providing an overview of their architecture and how to implement these techniques in Python. We will take the basic use case of finding similar sentences given a sentence, and demonstrate how to use such techniques for it. I will begin with an overview.
  2. The sentence embeddings can then be trivially used to compute sentence level meaning similarity as well as to enable better performance on downstream classification tasks using less supervised training data. Setup. This section sets up the environment for access to the Universal Sentence Encoder on TF Hub and provides examples of applying the encoder to words, sentences, and paragraphs.
  3. Similarity between two strings is: 0.8421052631578947. Using cosine similarity in Python: we'll construct a vector space from all the input sentences. The number of dimensions in this vector space will be the same as the number of unique words in all sentences combined. Then we'll calculate the angle between these vectors.
  4. We'll do this with three example sentences and then compute their similarity. After that, we'll go over actual methods that can be used to compute the document vectors. Let's consider three sentences: We went to the pizza place and you ate no pizza at all; I ate pizza with you yesterday at home; There's no place like home. To build our vectors, we'll count the occurrences of each word (a sketch follows this list).
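As a minimal sketch of items 3 and 4, assuming simple whitespace tokenisation, the count vectors and their cosine can be built by hand:

    import math
    from collections import Counter

    docs = [
        "We went to the pizza place and you ate no pizza at all",
        "I ate pizza with you yesterday at home",
        "There's no place like home",
    ]
    counts = [Counter(d.lower().split()) for d in docs]
    # dimensions = all unique words across the sentences
    vocab = sorted(set().union(*counts))
    vectors = [[c[w] for w in vocab] for c in counts]

    def cosine(u, v):
        num = sum(a * b for a, b in zip(u, v))
        den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return num / den

    print(cosine(vectors[0], vectors[1]))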

Sentence Similarity in Python using Doc2Vec, trending at kanoki.org. Posted on Mar 07, 2019 · 8 mins read. Introduction. Numeric representation of text documents is a challenging task in machine learning, and there are different ways to create the numerical features for texts, such as. A2A: it depends on what data you have. If you have labelled data — pairs of similar sentences labelled as +1 and pairs of dissimilar sentences labelled as -1 — then you can use a supervised learning algorithm. But I am assuming you do not have such labels. If two sentences have a component in common, then the counter, named 'cnt', gains 1, and the component is appended to the list variable 'common'. To quantify similarity, we divide 'cnt' by the length of the list 'a'. We also return the common component list. To test this, I made up three sentences: c = 'A cat sat on a mat.' Sentence-BERT for spaCy: this package wraps sentence-transformers (also known as Sentence-BERT) directly in spaCy. You can substitute the vectors provided in any spaCy model with vectors that have been tuned specifically for semantic similarity. The models below are suggested for analysing sentence similarity, as the STS benchmark indicates. If ratio_calc = True, the function computes the Levenshtein distance ratio of similarity between two strings. For all i and j, distance[i,j] will contain the Levenshtein distance between the first i characters of s and the first j characters of t:
    import numpy as np

    # Initialize matrix of zeros
    rows = len(s) + 1
    cols = len(t) + 1
    distance = np.zeros((rows, cols), dtype=int)
    # Populate the matrix of zeros with the indices of each character of both strings
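The common-component routine described above is easy to reconstruct. The following is an assumed sketch, not the original author's code:

    def common_similarity(s1, s2):
        a = s1.lower().split()
        b = s2.lower().split()
        cnt = 0
        common = []
        for word in a:
            if word in b:
                cnt += 1          # count components shared with the other sentence
                common.append(word)
        return cnt / len(a), common

    c = 'A cat sat on a mat.'
    print(common_similarity(c, 'A cat sat on a sofa.'))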

Cosine similarity tends to determine how similar two words or sentences are. It can be used for sentiment analysis and text comparison, and is used by a lot of popular packages out there like word2vec. So cosine similarity takes the dot product between the vectors of two documents/sentences to find the angle, and the cosine of that angle to derive the similarity. Top-level overview of text similarity: our solution will consist of the following components: NLP, using some Python code to do text preprocessing of the product's description; a TensorFlow model from TensorFlow Hub to construct a vector for each product description; comparing vectors will then allow us to compare corresponding products for their similarity. $ python -m nltk.downloader all. Words like 'no', 'not', etc. are used in negative sentences and are useful for semantic similarity. So before removing these words, observe the data, and based on your application select and filter the stop words. Remove punctuation
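For instance, a cautious stop-word filter that keeps negations might look like this sketch (assuming NLTK's English stopword list has been downloaded):

    import string
    from nltk.corpus import stopwords  # requires: python -m nltk.downloader stopwords

    # keep negation words, which matter for semantic similarity
    keep = {'no', 'not', 'nor'}
    stop_words = set(stopwords.words('english')) - keep

    sentence = "The bottle is not empty."
    tokens = [t.strip(string.punctuation).lower() for t in sentence.split()]
    filtered = [t for t in tokens if t and t not in stop_words]
    print(filtered)  # ['bottle', 'not', 'empty']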

sklearn.metrics.pairwise.cosine_similarity: sklearn.metrics.pairwise.cosine_similarity(X, Y=None, dense_output=True) computes cosine similarity between samples in X and Y. Cosine similarity, or the cosine kernel, computes similarity as the normalized dot product of X and Y. 花花酱 LeetCode 734. Sentence Similarity. By zxi on November 28, 2017. Problem: given two sentences words1, words2 (each represented as an array of strings), and a list of similar word pairs, determine whether the two sentences are similar. Cosine similarity is a popular NLP method for approximating how similar two word/sentence vectors are. The intuition behind cosine similarity is relatively straightforward: we simply use the cosine of the angle between the two vectors to quantify how similar two documents are. From trigonometry we know that cos(0°) = 1, cos(90°) = 0, and the cosine lies in [-1, 1] in general.
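A quick usage sketch of that scikit-learn function, on made-up row vectors:

    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    X = np.array([[1.0, 0.0, 2.0],
                  [2.0, 0.0, 4.0],
                  [0.0, 1.0, 0.0]])
    print(cosine_similarity(X))         # pairwise similarities among rows of X
    print(cosine_similarity(X, X[:1]))  # each row of X against the first row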

How to calculate the sentence similarity using word2vec

Let's implement it in our similarity algorithm. Open a file and tokenize sentences: create a .txt file and write 4-5 sentences in it. Include the file in the same directory as your Python program. Now, we are going to open this file with Python and split it into sentences:
    import nltk
    from nltk.tokenize import word_tokenize, sent_tokenize
    file_docs = []
In order to calculate sentence semantic similarity more accurately, a sentence semantic similarity calculation method based on segmented semantic comparison has been proposed. Learning unsupervised embeddings for textual similarity with transformers: in this article, we look at SimCSE, a simple contrastive sentence embedding framework, which can be used to produce superior sentence embeddings from either unlabeled or labeled data. The idea behind the unsupervised SimCSE is to simply predict the input sentence itself. Semantic similarity between sentences: given two sentences, the measurement determines how similar the meaning of the two sentences is. The higher the score, the more similar the meaning of the two sentences. Here are the steps for computing semantic similarity between two sentences: first, each sentence is partitioned into a list of tokens. sentence-similarity: I plan to implement some models for sentence similarity found in the literature, to reproduce and study them. They have a wide variety of applications, including: Paraphrase Detection (given two sentences, are the sentences paraphrases of each other?); Semantic Textual Similarity (given two sentences, how close are they in terms of semantic equivalence?); Natural Language Inference.
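Continuing that tokenisation snippet, a sketch of the open-and-split step (the file name demofile.txt is a placeholder, and the 'punkt' tokenizer data is assumed to be downloaded):

    import nltk
    from nltk.tokenize import word_tokenize, sent_tokenize

    file_docs = []
    with open('demofile.txt') as f:
        file_docs.extend(sent_tokenize(f.read()))

    # word-level tokens for each sentence, lowercased
    gen_docs = [[w.lower() for w in word_tokenize(doc)] for doc in file_docs]
    print(len(file_docs), 'sentences loaded')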

Hi guys, in this tutorial we're going to learn how to make a plagiarism detector in Python using machine learning techniques such as word2vec and cosine similarity, in just a few lines of code. Overview: once finished, our plagiarism detector will be capable of loading a student's assignment from files and then computing the similarity to determine if students copied each other. [NLP] Use Python to Calculate the Minimum Edit Distance of Two Sentences. Clay; 2021-07-22; NLP, Python. When we process a natural language processing task, sometimes we may need to determine the similarity of two sentences. Of course, it is also possible that you want to determine the similarity between texts, not just sentences. There are many popular methods, such as using word embeddings. Sentence similarity is one of the clearest examples of how powerful highly-dimensional magic can be. The logic is this: take a sentence, convert it into a vector. Take many other sentences, and convert them into vectors. Find sentences that have the smallest distance (Euclidean) or smallest angle (cosine similarity) between them - more on that here. We now have a measure of semantic similarity. In my code I want to make a nested dictionary from a sentence. In my dictionary I have a key which is a word of the sentence, and its value is a dictionary that tells me its similarity with the other words of the sentence using fuzzy matching. The point is, if we consider my sentence to be something like my_text = 'Have you ever wanted', the output is a nested dictionary of per-word similarity scores.
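A minimal sketch of that nested dictionary, using the standard library's difflib for the fuzzy ratio (the original question may have used a dedicated fuzzy-matching package instead):

    from difflib import SequenceMatcher

    my_text = 'Have you ever wanted'
    words = my_text.split()

    # word -> {other word -> fuzzy similarity ratio in [0, 1]}
    similarity = {
        w: {o: SequenceMatcher(None, w, o).ratio() for o in words if o != w}
        for w in words
    }
    print(similarity['ever'])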

python - FileNotFound error downloading roberta-model

NLP with spaCy Python Tutorial - Semantic Similarity. In this tutorial we will be learning about semantic similarity with spaCy; spaCy is able to compare two objects.
    from inltk.inltk import get_sentence_similarity
    # sentence1, sentence2 are strings in '<code-of-language>';
    # similarity of encodings is calculated by the cmp function,
    # whose default is cosine similarity
    get_sentence_similarity(sentence1, sentence2, '<code-of-language>', cmp=cos_sim)
    # Example: get_sentence_similarity('मैं इन…
Sentence similarity is computed as a linear combination of semantic similarity and word order similarity. Semantic similarity is computed as the cosine similarity between the semantic vectors for the two sentences. To build the semantic vector, the union of words in the two sentences is treated as the vocabulary. If a word occurs in the sentence, its value for that position is 1; if it doesn't, the value is taken from the word's similarity to the most similar word in the sentence.
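A rough sketch of that semantic-vector construction, with binary occurrence only (the most-similar-word fallback for missing words is omitted here for brevity):

    import math

    def semantic_vector(words, vocab):
        # 1.0 when the vocabulary word occurs in the sentence, else 0.0
        return [1.0 if w in words else 0.0 for w in vocab]

    s1 = 'a quick brown fox'.split()
    s2 = 'a fast brown fox'.split()
    vocab = sorted(set(s1) | set(s2))
    v1, v2 = semantic_vector(s1, vocab), semantic_vector(s2, vocab)

    cos = sum(a * b for a, b in zip(v1, v2)) / (
        math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2)))
    print(cos)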

Overview of Text Similarity Metrics in Python by Sanket

The main objective of semantic similarity is to measure the distance between the semantic meanings of a pair of words, phrases, sentences, or documents. For example, the word car is more similar to bus than it is to cat. The two main approaches to measuring semantic similarity are knowledge-based approaches and corpus-based, distributional methods. We can run a Python script from which we use the BERT service to encode our words into word embeddings. Given that, we just have to import the BERT-client library and create an instance of the client class. Once we do that, we can feed in the list of words or sentences that we want to encode:
    from bert_serving.client import BertClient
Suppose I have created a model and wish to find similarities with new sentences. I create new labels for each new sentence. I can call model.train, but it does not update the vocabulary from the loaded model to include the words in the new sentences. If I do:
    model.load(orig.model)  # this has 100000 labels
    sentences = []
    currentlines = 100000
    for line in lines:
        currentlines += 1
Using the Universal Sentence Encoder module of tf.Hub; building an approximate similarity matching index using Spotify's Annoy library; serving the index for real-time semantic search in a web app. The code for the example solution is in the realtime-embeddings-matching GitHub repository. Approximate similarity matching. We will be using the Universal Sentence Encoder for generating sentence embeddings. The universal encoder supports several downstream tasks and thus can be adopted for multi-task learning, i.e., the generated sentence embeddings can be used for multiple tasks like sentiment analysis, text classification, sentence similarity, etc.
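To make the BERT-client step concrete, a minimal sketch, assuming a bert-serving server is already running elsewhere (the model path is a placeholder):

    # start the server separately, e.g.:
    #   bert-serving-start -model_dir /path/to/bert_model
    from bert_serving.client import BertClient

    bc = BertClient()
    vecs = bc.encode(['First do it', 'then do it right', 'then do it better'])
    print(vecs.shape)  # e.g. (3, 768) for a BERT-base model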

Measuring Similarity Between Texts in Python - Loretta C

For the Sentence Similarity Based on Semantic Nets and Corpus Statistics paper, I found a couple of references behind the IEEE and ACM paywalls, but the original link to sinica.edu.tw seems to be dead. I guess you are out of luck unless you or your organization have access (I don't, unfortunately). Summary: Sentence Similarity With Transformers and PyTorch. All we ever seem to talk about nowadays is BERT this, BERT that. I want to talk about something else, but BERT is just too good - so this video will be about BERT for sentence similarity. So for the sentence 'The cat sat on the mat', a 3-gram representation of this sentence would be 'The cat sat', 'cat sat on', 'sat on the', 'on the mat'. The skip part refers to the number of times an input word is repeated in the data-set with different context words (more on this later). These grams are fed into the Word2Vec context prediction system.
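Generating those word n-grams is a one-liner; a quick sketch:

    def ngrams(sentence, n=3):
        words = sentence.split()
        return [' '.join(words[i:i + n]) for i in range(len(words) - n + 1)]

    print(ngrams('The cat sat on the mat'))
    # ['The cat sat', 'cat sat on', 'sat on the', 'on the mat']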

Cosine Similarity - understanding the math and how it works (with Python code). Cosine similarity is a metric used to measure how similar documents are, irrespective of their size. Mathematically, it measures the cosine of the angle between two vectors projected in a multi-dimensional space. The cosine similarity is advantageous because even if two similar documents are far apart by Euclidean distance (because of their size), they may still be oriented closely together. Text similarity search with vector fields: from its beginnings as a recipe search engine, Elasticsearch was designed to provide fast and powerful full-text search. Given these roots, improving text search has been an important motivation for the ongoing work with vectors. In Elasticsearch 7.0, experimental field types for high-dimensional vectors were introduced.

Learn how to compute tf-idf weights and the cosine similarity score between two vectors. You will use these concepts to build a movie and a TED Talk recommender. Finally, you will also learn about word embeddings, and using word vector representations you will compute similarities between various Pink Floyd songs. This is the summary of the lecture Feature Engineering for NLP in Python, via DataCamp. The following are 30 code examples showing how to use networkx.pagerank(). These examples are extracted from open source projects. You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example.
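A compact sketch of the tf-idf + cosine step used by such recommenders (the document texts are made up here):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import linear_kernel

    docs = [
        'a movie about space travel',
        'a documentary on interstellar travel',
        'a talk about music theory',
    ]
    tfidf = TfidfVectorizer().fit_transform(docs)
    # tf-idf rows are L2-normalised by default, so the linear kernel
    # is exactly the cosine similarity
    sim = linear_kernel(tfidf, tfidf)
    print(sim.round(2))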

Sentence Similarity is an open source software project: PyTorch implementations of various deep learning models for paraphrase detection, semantic similarity, and textual entailment. What are some good ways of estimating 'approximate' semantic similarity between sentences? I have been looking at the nlp tag on SO for the past couple of hours and am confident I did not miss anything, but if I did, please do point me to the question. In the meantime, I will describe what I am trying to do. There is also a keyword-filtered Semantic Sentence Similarity Python notebook using data from multiple data sources (deep learning, NLP).

Semantic Similarity Using Transformers | by Raymond Cheng

Cosine similarity between two vectors in Python. The following code shows how to calculate the cosine similarity between two arrays in Python:
    from numpy import dot
    from numpy.linalg import norm

    # define arrays
    a = [23, 34, 44, 45, 42, 27, 33, 34]
    b = [17, 18, 22, 26, 26, 29, 31, 30]

    # calculate cosine similarity
    cos_sim = dot(a, b) / (norm(a) * norm(b))
    print(cos_sim)  # 0.965195008357566
The cosine similarity between the two arrays is roughly 0.965. Levenshtein distance and text similarity in Python, by Frank Hofmann. Introduction: writing text is a creative process that is based on thoughts and ideas which come to our mind. The way that a text is written reflects our personality and is also very much influenced by the mood we are in, the way we organize our thoughts, the topic itself, and the people we are addressing - our readers.

Python | Perform Sentence Segmentation Using Spacy

With reference to your post on the word2vec CBOW model, I have created a model for finding sentence similarity (sentence relevancy), but for some sentences my model is still not showing proper relevancy. Let me briefly explain: I want to check whether any article or sentence is useful for me or not, based on a model trained on my own data. Sentence Similarity Calculator: this repo contains various ways to calculate the similarity between source and target sentences. You can choose the pre-trained models you want to use, such as ELMo, BERT and the Universal Sentence Encoder (USE), and you can also choose the method used to compute the similarity. Original problem: given two sentences words1, words2 (each represented as an array of strings), and a list of similar word pairs pairs, determine if the two sentences are similar. For example, 'great acting skills' and 'fine drama talent' are similar, given the pairs [['great', 'fine'], ['acting', 'drama'], ['skills', 'talent']].
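A short sketch of that check, comparing the sentences word for word against the pair list (note the problem's similarity relation is not transitive):

    def are_sentences_similar(words1, words2, pairs):
        if len(words1) != len(words2):
            return False
        similar = {(a, b) for a, b in pairs}
        return all(
            w1 == w2 or (w1, w2) in similar or (w2, w1) in similar
            for w1, w2 in zip(words1, words2)
        )

    print(are_sentences_similar(
        ['great', 'acting', 'skills'],
        ['fine', 'drama', 'talent'],
        [['great', 'fine'], ['acting', 'drama'], ['skills', 'talent']],
    ))  # True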

Calculating String Similarity in Python by Dario Radečić

  1. Simple Question Answering (QA) systems that use text similarity detection in Python. In this scenario, QA systems are designed to be alert to text similarity and answer questions that are asked in natural language. But some also derive information from images to answer questions, for example when you're clicking on image boxes to prove that you are not a robot.
  2. 1. Tokenize documents into sentences. 2. Preprocess each sentence in the document. 3. Count key phrases and normalize them or produce a TF-IDF matrix; you can also use any kind of vectorization, such as spaCy vectors. 4. Calculate the Jaccard similarity between sentences and key phrases (a sketch follows this list). 5. Rank the sentences with higher significance.
  3. For sentence tokenization, we will use a preprocessing pipeline, because sentence preprocessing using spaCy includes a tokenizer, a tagger, a parser and an entity recognizer that we need to access to correctly identify what's a sentence and what isn't. In the code below, spaCy tokenizes the text and creates a Doc object.
  4. Similarity interface. With some standard Python magic we sort these similarities into descending order, and obtain the final answer to the query 'Human computer interaction':
     sims = sorted(enumerate(sims), key=lambda item: -item[1])
     for doc_position, doc_score in sims:
         print(doc_score, documents[doc_position])
     Out:
     0.9984453 The EPS user interface management system
     0.998093 Human machine interface for lab abc computer applications
  5. Sentence Similarity III: a LeetCode problem. Community solutions match the longest common prefix and suffix of the two word lists, e.g. a C++ solution in constant memory and linear time, and Python solutions that find the prefix and suffix of the lists.
  6. We will see how to load pre-trained models first, and then provide a tutorial for creating your own word embeddings using Gensim and the 20_newsgroups dataset.
  7. Question or problem about Python programming: I need to compare documents stored in a DB and come up with a similarity score between 0 and 1. The method I need to use has to be very simple: implementing a vanilla version of n-grams (where it is possible to define how many grams to use), along with a simple implementation of TF-IDF and cosine similarity.
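For the Jaccard step in item 2 above, a minimal sketch over word sets (the key phrase and sentences are invented here for illustration):

    def jaccard(a, b):
        a, b = set(a.lower().split()), set(b.lower().split())
        return len(a & b) / len(a | b)

    key_phrase = 'pizza place'
    sentences = [
        'we went to the pizza place and ate pizza',
        'there is no place like home',
    ]
    # rank sentences by their Jaccard overlap with the key phrase
    ranked = sorted(sentences, key=lambda s: jaccard(s, key_phrase), reverse=True)
    print(ranked[0])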


nltk-examples/short_sentence_similarity

GitHub - Huffon/sentence-similarity: This repository

Sentence similarity prediction to fit non-English translations into predefined templates. Skills: Python. Similarity and Distance: we can extract similarity between words/sentences or documents using metrics like cosine similarity. If you haven't installed Python yet, follow the steps here. Assign the raw data to the variable text and print it on the console:
    text = ("In the mythical continent of Westeros, Nine noble families fight "
            "for control of the Seven Kingdoms. As conflict erupts in the kingdoms...")
spaCy is a free open-source library for Natural Language Processing in Python. It features NER, POS tagging, dependency parsing, word vectors and more. Semantic similarity is the practical, widely used approach to address the natural-language-understanding issue in many core NLP tasks such as paraphrase identification, question answering, natural language generation, and intelligent tutoring systems. The full understanding approach, which is the other approach to language understanding, is desirable, but full language understanding is much harder to achieve.
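spaCy's similarity API in brief; a model with word vectors, such as en_core_web_md, is assumed to be installed:

    import spacy

    # python -m spacy download en_core_web_md
    nlp = spacy.load('en_core_web_md')

    doc1 = nlp('The cat sits on the ground')
    doc2 = nlp('The cat walks on the sidewalk')
    # similarity of the averaged word vectors of the two docs
    print(doc1.similarity(doc2))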

Sentence Similarity With BERT | Towards Data Science

10+ loss functions allow tuning models specifically for semantic search, paraphrase mining, semantic similarity comparison, clustering, triplet loss and contrastive loss. Performance: our models are evaluated extensively on 15+ datasets, including challenging domains like tweets, Reddit and emails. They achieve by far the best performance of all available sentence embedding methods. The training is streamed, so ``sentences`` can be an iterable, reading input data from the disk or network on the fly, without loading your entire corpus into RAM. Note the sentences iterable must be restartable (not just a generator), to allow the algorithm to stream over your dataset multiple times. For some examples of streamed iterables, see BrownCorpus, Text8Corpus or LineSentence. spaCy is a natural language processing library for Python designed to have fast performance, with word embedding models built in. Gensim is a topic modelling library for Python that provides modules for training Word2Vec and other word embedding algorithms, and allows using pre-trained models. This tutorial works with Python 3. Any method to generate text using only keywords in GPT-2? I am fine-tuning a GPT-2 model in order to generate text. One approach I've found that works is to append the nouns at the start of the sentence and then pass it to the model for fine-tuning. The text would get shortened even more due to the max_len feature.
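To illustrate the streamed-iterable point, a small sketch with gensim's LineSentence; parameter names follow gensim 4.x (older versions use size instead of vector_size), and the corpus path is a placeholder:

    from gensim.models import Word2Vec
    from gensim.models.word2vec import LineSentence

    # LineSentence streams a whitespace-tokenised corpus from disk,
    # one sentence per line, and is restartable (unlike a plain generator)
    sentences = LineSentence('corpus.txt')
    model = Word2Vec(sentences=sentences, vector_size=100, window=5, min_count=2)
    # works only if both words survived min_count pruning
    print(model.wv.similarity('woman', 'man'))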

Semantic Similarity in Sentences and BERT by Siddharth

Python | Word Similarity using spaCy - GeeksforGeeks

A Simple And Very Useful -WORD And SENTENCE- Similarity
FastText Word Embeddings for Text Classification with MLP
Word Embedding Using BERT In Python | by Anirudh S