Matchcode Optimization:Fast Near

From Melissa Data Wiki
 
 
==Fast Near==

===Specifics===
:Fast Near is a Melissa Data algorithm loosely based on the Levenshtein Distance Algorithm, which returns the distance between two strings, where distance is defined as the number of incorrect characters, insertions, and deletions.
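For reference, here is a minimal Python sketch of the Levenshtein distance described above — the standard textbook dynamic-programming version, not Melissa Data's internal implementation:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of substitutions, insertions, and deletions
    needed to turn string `a` into string `b`."""
    if len(a) < len(b):
        a, b = b, a  # keep the rolling row as short as possible
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("Maguire", "Mcguire"))  # 1: a single substitution
print(levenshtein("Johnson", "Jhnsn"))    # 2: two deleted letters
```

Maguire/Mcguire differ by a single substitution, while Johnson/Jhnsn requires two deletions — which is why, under a tight setting, the first pair can match while the second stays unique.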
  
 
===Summary===
:A typographical matching algorithm, Fast Near works best at matching words that fail to match because of a few typographical errors. The user specifies (on a scale from 1 to 4, with 1 being the tightest) the degree of similarity between data being matched. The scale is then used as a weight, which is adjusted based on the length of the strings being compared. The Fast Near algorithm is a speedy approximation of the Accurate Near algorithm.
  
 
===Returns===
:A Boolean ‘match or no match’ result, based on whether the compared data has fewer than the adjusted number of allowed differences.
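The scale-to-threshold mapping is not published, but the overall decision can be sketched as follows. Note that `near_match` and the `allowed` formula (including the division by 8) are illustrative assumptions, not Melissa Data's actual weighting:

```python
def levenshtein(a: str, b: str) -> int:
    """Standard edit distance (substitutions, insertions, deletions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def near_match(s1: str, s2: str, scale: int = 2) -> bool:
    """Hypothetical near comparison. `scale` is the 1-4 tightness
    setting; the `allowed` formula below is an illustrative guess,
    NOT Melissa Data's published weighting, so it will not reproduce
    the example table on this page exactly."""
    # Illustrative assumption: tighter scales and shorter strings
    # permit fewer differences.
    allowed = max(1, scale * max(len(s1), len(s2)) // 8)
    return levenshtein(s1, s2) <= allowed

print(near_match("Maguire", "Mcguire", scale=2))  # True: one substitution
```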
  
 
===Example Matchcode Component===

|AdditionalRows=
{{EDTRow|White|Johnson|Jhnsn|Unique}}
{{EDTRow|Green|Maguire|Mcguire|Match Found}}
{{EDTRow|Green|Deanardo|Dinardio|Match Found}}
{{EDTRow|White|34-678 Core|34-678 Reactor|Unique}}
}}
  
  
 
===Recommended Usage===
:Batch processing: this is a fast algorithm that will identify a greater percentage of duplicates than other algorithms, but since its routine is more basic, Fast Near will sometimes find false matches or miss true matches.
  
 
===Not Recommended For===
:Gather/scatter, survivorship, or record consolidation of sensitive data.

:Quantifiable data or records with proprietary keywords not associated in our knowledgebase tables.
  
 
===Do Not Use With===
:UTF-8 data. This algorithm was ported to MatchUp with the assumption that a character equals one byte, and therefore results may not be accurate if the data contains multi-byte characters.
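The byte-versus-character pitfall can be demonstrated directly: running the same edit-distance routine over characters and over raw UTF-8 bytes gives different answers once a multi-byte character such as 'ü' is involved (this demonstration uses a textbook Levenshtein implementation, not MatchUp itself):

```python
def levenshtein(a, b):
    """Edit distance over any indexable sequence (str or bytes)."""
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        curr = [i]
        for j in range(1, len(b) + 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1,
                            prev[j - 1] + (a[i - 1] != b[j - 1])))
        prev = curr
    return prev[-1]

s1, s2 = "Müller", "Muller"
print(levenshtein(s1, s2))               # 1: one character differs
print(levenshtein(s1.encode("utf-8"),
                  s2.encode("utf-8")))   # 2: 'ü' encodes as two bytes
```

A byte-oriented routine sees the single character 'ü' as two separate units, inflating the distance and potentially pushing a true match past the allowed threshold.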
  
  
 
[[Category:MatchUp Hub]]
 
[[Category:MatchUp Hub]]
 
[[Category:Matchcode Optimization]]
 
[[Category:Matchcode Optimization]]

Latest revision as of 23:13, 26 September 2018

