I have developed a simple and fast algorithm in PHP to compare images for equality.
It's fast (it hashes 800x600 images at roughly 40 per second), and even an unoptimised search algorithm that compares every image against every other (around 3 images per second) works through 3,000 images in about 22 minutes.
The basic overview: take the image, resize it to 8x8, and convert the pixels to HSV. Hue, saturation and value are each truncated to 4 bits, and the result is one big hex string.
Comparing two images basically walks along the two strings and sums up the differences. If the total is under 64 they are the same image; different images usually come out around 600 - 800, and anything below 20 is very similar.
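For illustration, here is a minimal sketch of that scheme as I read it, using PHP's GD extension. The function names (rgbToHsv, hsvHash, hashDistance) and the exact rounding are my own assumptions, not the code described above.

```php
<?php
// Minimal sketch of the 8x8 HSV hash described above (GD extension assumed).

function rgbToHsv(int $r, int $g, int $b): array {
    $r /= 255; $g /= 255; $b /= 255;
    $max = max($r, $g, $b);
    $min = min($r, $g, $b);
    $d = $max - $min;
    $v = $max;
    $s = $max == 0 ? 0 : $d / $max;
    if ($d == 0) {
        $h = 0;
    } elseif ($max == $r) {
        $h = fmod(($g - $b) / $d, 6) / 6;
    } elseif ($max == $g) {
        $h = (($b - $r) / $d + 2) / 6;
    } else {
        $h = (($r - $g) / $d + 4) / 6;
    }
    if ($h < 0) {
        $h += 1;
    }
    return [$h, $s, $v];   // each component in 0..1
}

// Resize to 8x8, convert each pixel to HSV and keep 4 bits per component:
// 64 pixels x 3 nibbles = 192 hex digits per image.
function hsvHash(string $file): string {
    $src  = imagecreatefromstring(file_get_contents($file));
    $tiny = imagecreatetruecolor(8, 8);
    imagecopyresampled($tiny, $src, 0, 0, 0, 0, 8, 8, imagesx($src), imagesy($src));

    $hash = '';
    for ($y = 0; $y < 8; $y++) {
        for ($x = 0; $x < 8; $x++) {
            $rgb = imagecolorat($tiny, $x, $y);
            [$h, $s, $v] = rgbToHsv(($rgb >> 16) & 0xFF, ($rgb >> 8) & 0xFF, $rgb & 0xFF);
            $hash .= dechex((int)($h * 15)) . dechex((int)($s * 15)) . dechex((int)($v * 15));
        }
    }
    return $hash;
}

// Walk the two strings and sum the per-nibble differences.
// Thresholds from the post: under 64 = same image, under 20 = very similar.
function hashDistance(string $a, string $b): int {
    $total = 0;
    for ($i = 0, $n = strlen($a); $i < $n; $i++) {
        $total += abs(hexdec($a[$i]) - hexdec($b[$i]));
    }
    return $total;
}
```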
Are there any improvements on this model that I could use? I haven't looked at how relevant the different components (hue, saturation and value) are to the comparison. Hue is probably quite important, but what about the others?
To speed up the searches, I could split the 4 bits from each component in half and put the most significant bits first, so that if they fail the check the least significant bits never need to be tested at all. I don't know an efficient way to store those bits, though, that still lets them be searched and compared easily.
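One way that split could look, as a hedged sketch building on the hex string above (splitHash, msbLowerBound and splitDistance are my own illustrative names, and the lower-bound early exit is my framing of the idea, not something from the post):

```php
<?php
// Reorder each hash so the top 2 bits of every component come first, then the
// bottom 2 bits; a cheap bound computed from the top halves alone can reject
// most candidates before the low bits are ever read.

function splitHash(string $hexHash): string {
    $msb = '';
    $lsb = '';
    for ($i = 0, $n = strlen($hexHash); $i < $n; $i++) {
        $v = hexdec($hexHash[$i]);
        $msb .= dechex($v >> 2);    // top 2 bits (0..3)
        $lsb .= dechex($v & 0x3);   // bottom 2 bits (0..3)
    }
    return $msb . $lsb;             // most significant halves stored first
}

// A difference of d in the top 2 bits means the full 4-bit values differ by
// at least 4*d - 3, so this sum is a lower bound on the real distance.
function msbLowerBound(string $a, string $b, int $half): int {
    $bound = 0;
    for ($i = 0; $i < $half; $i++) {
        $d = abs(hexdec($a[$i]) - hexdec($b[$i]));
        $bound += max(0, 4 * $d - 3);
    }
    return $bound;
}

// Full comparison of two split hashes, with an early reject on the MSBs.
function splitDistance(string $a, string $b, int $threshold = 64): ?int {
    $half = strlen($a) >> 1;
    if (msbLowerBound($a, $b, $half) > $threshold) {
        return null;   // cannot possibly be under the threshold; LSBs never read
    }
    $total = 0;
    for ($i = 0; $i < $half; $i++) {
        $fullA = (hexdec($a[$i]) << 2) | hexdec($a[$half + $i]);
        $fullB = (hexdec($b[$i]) << 2) | hexdec($b[$half + $i]);
        $total += abs($fullA - $fullB);
    }
    return $total;
}
```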
I am using a dataset of 3,000 photos (mostly unique) and there have been no false positives. It is completely immune to resizing and fairly resistant to changes in brightness and contrast.
What you want to use is locality-sensitive hashing:
The hash method you already have leaves plenty to tweak, but it should work fine :)
The important step to making this fast is to convert your values into a unary representation and then take a random subset of those bits as a new hash. Do this with 20-50 random samples and you get 20-50 hash tables. If a candidate matches in 2 or more of those 50 hash tables, it is very likely to be similar to something you have already stored, which lets you replace a full linear scan with a handful of table lookups.
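A rough PHP sketch of that scheme, under my own assumptions about the details (the 15-bit unary encoding, 30 tables, 24 bits per key and all function names are illustrative, not the answerer's implementation):

```php
<?php
// Locality-sensitive hashing over the hex hashes: unary-encode each nibble,
// then key a number of hash tables on different random subsets of the bits.

// Unary ("thermometer") encoding: a 4-bit value v becomes 15 bits with the
// first v of them set, so nearby values share most of their bits.
function unaryBits(string $hexHash): array {
    $bits = [];
    for ($i = 0, $n = strlen($hexHash); $i < $n; $i++) {
        $v = hexdec($hexHash[$i]);
        for ($j = 0; $j < 15; $j++) {
            $bits[] = $j < $v ? 1 : 0;
        }
    }
    return $bits;
}

// Build $numTables tables, each keyed by its own random subset of bit positions.
function buildLshTables(array $hashes, int $numTables = 30, int $bitsPerKey = 24): array {
    $encoded   = array_map('unaryBits', $hashes);   // image ids stay as keys
    $totalBits = count(reset($encoded));
    $tables    = [];
    for ($t = 0; $t < $numTables; $t++) {
        $positions = (array) array_rand(array_fill(0, $totalBits, true), $bitsPerKey);
        $buckets   = [];
        foreach ($encoded as $id => $bits) {
            $key = '';
            foreach ($positions as $p) {
                $key .= $bits[$p];
            }
            $buckets[$key][] = $id;
        }
        $tables[] = ['positions' => $positions, 'buckets' => $buckets];
    }
    return $tables;
}

// Look a new image up in every table; anything that collides in 2 or more
// tables is a candidate worth running the full comparison against.
function lshCandidates(array $tables, string $hexHash, int $minMatches = 2): array {
    $bits   = unaryBits($hexHash);
    $counts = [];
    foreach ($tables as $table) {
        $key = '';
        foreach ($table['positions'] as $p) {
            $key .= $bits[$p];
        }
        foreach ($table['buckets'][$key] ?? [] as $id) {
            $counts[$id] = ($counts[$id] ?? 0) + 1;
        }
    }
    return array_keys(array_filter($counts, fn($c) => $c >= $minMatches));
}
```

The full nibble-by-nibble comparison then only runs against the handful of candidates the tables return, instead of against every stored hash.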
Hope this is helpful; if you want to try your own implementation of image matching in parallel, drop me a mail at