Matchcode Optimization:Accunear




Accurate Near

Specifics

Accurate Near is a Melissa Data Algorithm largely based on the Levenshtein Distance Algorithm.

Summary

A typographical matching algorithm. You specify the degree of similarity between the data being matched on a scale from 1 to 4, with 1 being the tightest. This scale is then used as a weight which is adjusted for the length of the strings being compared. Because the algorithm builds a 2D array to determine the distance between two strings, results will be more accurate than Fast Near, at the expense of throughput.
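The 2D array mentioned above is the standard dynamic-programming table used to compute Levenshtein distance. A minimal Python sketch of that computation is shown below; the function name and code are illustrative only and are not MatchUp's actual implementation.

def levenshtein_distance(s1, s2):
    """Classic Levenshtein edit distance using a full 2D array,
    counting substitutions, insertions, and deletions."""
    rows, cols = len(s1) + 1, len(s2) + 1
    d = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        d[i][0] = i  # i deletions to turn a prefix of s1 into an empty string
    for j in range(cols):
        d[0][j] = j  # j insertions to build a prefix of s2 from an empty string
    for i in range(1, rows):
        for j in range(1, cols):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1  # substitution cost
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[rows - 1][cols - 1]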

Returns

Boolean ‘match’ if the normalized distance between two strings is less than the configured scale, where distance is defined as the count of incorrect characters, insertions, and deletions.
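The match test described above might be sketched as follows. The exact weighting MatchUp applies to the 1 to 4 scale and the string lengths is not published, so the allowed-edits formula here is a hypothetical stand-in rather than the product's real rule.

def accurate_near_match(s1, s2, scale):
    """Illustrative only: turn the 1-4 scale (1 = tightest) into an allowed
    number of edits that grows with string length, then compare it against
    the Levenshtein distance computed above."""
    if not 1 <= scale <= 4:
        raise ValueError("scale must be between 1 and 4")
    longest = max(len(s1), len(s2))
    allowed = max(1, round(longest * scale / (scale + 6)))  # hypothetical length weighting
    return levenshtein_distance(s1, s2) <= allowed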

Example Matchcode Component

[Image: MCO Algorithm Accunear.png]

Example Data

STRING1        STRING2          RESULT
Johnson        Jhnsn            Match Found
Maguire        Mcguire          Match Found
Deanardo       Dinardio         Unique
34-678 Core    34-678 Reactor   Unique
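Running the illustrative levenshtein_distance() sketch from the Summary section over these pairs gives a rough sense of why they separate this way: the first two pairs are only one or two edits apart, while the last two require more edits. The Match Found / Unique results in the table come from MatchUp itself, not from this sketch.

example_pairs = [
    ("Johnson", "Jhnsn"),
    ("Maguire", "Mcguire"),
    ("Deanardo", "Dinardio"),
    ("34-678 Core", "34-678 Reactor"),
]
for a, b in example_pairs:
    # Print the raw edit distance for each pair from the example table.
    print(a, "|", b, "|", levenshtein_distance(a, b))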



Performance
Scale from Slower to Faster.

Matches
Scale from More Matches to Greater Accuracy.


Recommended Usage

This algorithm works best for matching words that fail to match exactly because of a few typographical errors, and where the accuracy of duplicates caught outweighs performance concerns.

Not Recommended For

Gather/scatter, survivorship, or record consolidation of sensitive data. Quantifiable data or records with proprietary keywords not associated with our knowledgebase tables.

Do Not Use With

UTF-8 data. This algorithm was ported to MatchUp with the assumption that one character equals one byte, so results may not be accurate if the data contains multi-byte characters.
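A quick check (reusing the illustrative levenshtein_distance() sketch above, not a MatchUp API) shows why the one-byte-per-character assumption matters: an accented character occupies two bytes in UTF-8, so a single character-level difference is counted as two byte-level edits.

s1, s2 = "Muller", "Müller"
print(len(s1), len(s2))                                  # 6 6  character counts
print(len(s1.encode("utf-8")), len(s2.encode("utf-8")))  # 6 7  byte counts
print(levenshtein_distance(s1, s2))                      # 1 edit at the character level
print(levenshtein_distance(s1.encode("utf-8"),
                           s2.encode("utf-8")))          # 2 edits at the byte level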