Matchcode Optimization:N-Gram




N-Gram

Specifics

Summary

Counts the number of common contiguous sub-strings (grams) between the two strings.

Returns

Percentage of similarity
MatchingGrams / (LongestLength - (NGRAM - 1))
NGRAM is the length of the common sub-strings this algorithm looks for. The MatchUp default is NGRAM = 2. For "ABCD" vs "GABCE", the matching grams would be "AB" and "BC".
MatchingGrams is the number of matching grams found between the two strings.
LongestLength is the length of the longer of the two strings being compared. The denominator therefore equals the number of grams contained in the longer string.
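For illustration, here is a minimal sketch of this calculation in Python, assuming case-sensitive, whole-string comparison with the default NGRAM = 2. The function name ngram_similarity is illustrative only and is not part of the MatchUp API.

  from collections import Counter

  def ngram_similarity(s1, s2, n=2):
      """Percentage of similarity: MatchingGrams / (LongestLength - (NGRAM - 1))."""
      # Count every contiguous sub-string (gram) of length n in each string.
      grams1 = Counter(s1[i:i + n] for i in range(len(s1) - n + 1))
      grams2 = Counter(s2[i:i + n] for i in range(len(s2) - n + 1))
      # Matching grams = multiset intersection of the two gram counts.
      matching = sum((grams1 & grams2).values())
      # Denominator = number of grams in the longer string.
      denominator = max(len(s1), len(s2)) - (n - 1)
      return 100.0 * matching / denominator if denominator > 0 else 0.0

  print(ngram_similarity("ABCD", "GABCE"))  # 50.0 -- matching grams "AB" and "BC"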

Example Matchcode Component

[Image: MCO Algorithm NGram.png]

Example Data

STRING1             STRING2             RESULT
Johnson             Jhnsn               Unique
Neumon              Pneumon             Match Found
Beaumarchais        Bumarchay           Unique
Apco Oil Lube 170   Apco Oil Lube 342   Match Found
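Running the sketch above on two of these pairs, assuming uppercase whole-field comparison, gives the scores below; whether a given percentage is reported as Match Found or Unique depends on the fuzzy threshold configured in the matchcode component.

  print(ngram_similarity("NEUMON", "PNEUMON"))   # ~83.3 -- 5 matching grams / (7 - 1)
  print(ngram_similarity("JOHNSON", "JHNSN"))    # ~33.3 -- only "HN" and "NS" match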



Performance (scale): Slower ↔ Faster
Matches (scale): More Matches ↔ Greater Accuracy


Recommended Usage

Hybrid deduper, where a single incoming record can quickly be evaluated independently against each record in an existing large master database.
Batch processes where N-Gram is set on a single non-first matchcode component.
Databases created with abbreviations or similar word substitutions.
Multi-word field data where a trailing word does not appear in every record of the expected group, or where the data contains acceptable variations of one of the keywords (see the sketch after this list).
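As an illustration of the trailing-word case, the sketch above still scores a truncated variant highly, because most grams come from the shared leading words. The example strings are hypothetical.

  # Trailing word missing from one record: most grams still match.
  print(ngram_similarity("APCO OIL LUBE 170", "APCO OIL LUBE"))  # 75.0 -- 12 of 16 grams match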

Not Recommended For

Databases where the number of errors relative to the string length results in only a small number of common sub-strings.

Do Not Use With

UTF-8 data. This algorithm was ported to MatchUp with the assumption that a character equals one byte, and therefore results may not be accurate if the data contains multi-byte characters.
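To see why, the sketch from above can be applied to the same data once per character and once per byte (a hedged illustration; MatchUp's internal byte handling may differ). The two-byte UTF-8 encoding of "Ü" creates extra, non-matching grams and lowers the score.

  # Character view: "Ü" is one unit -> grams "MÜ", "ÜL", "LL", "LE", "ER"
  print(ngram_similarity("MÜLLER", "MULLER"))                    # 60.0
  # Byte view: "Ü" is two bytes -> extra grams that never match
  print(ngram_similarity("MÜLLER".encode("utf-8"),
                         "MULLER".encode("utf-8")))              # 50.0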