Noise in the Machine
Jan. 6th, 2016 10:58 pm
Let's talk noise.
I've been racking my brain for the past few months wondering how to find patterns within noise. For AI programming, this is probably one of the most difficult questions I've pondered, especially since I've never actually programmed a computer to detect patterns. I wanted to see how much of it I could reason out in my mind through introspection before attempting the pattern recognition studies I'd like to pursue.
I kept imagining ongoing sets of noise inputs and wondering what separates those inputs from an input that has, what we as adult humans would call, some form of structure (read: pattern) that could be detected. How is one still image of "noise" different from a still image of "a pattern"? How do our eyes look out upon a visual field and not just constantly see various degrees of noise?
Yeah, yeah. I'm quite aware that studying things like Bayes' formulas will shed light on things, but again, my goal was to be able to reason it out BEFORE actually looking into how some programmers and mathematicians have already figured it out.
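To make the "noise image vs. pattern image" question concrete, here's one crude way to quantify it; this is my own sketch, not anything from the post, and the stripes-plus-noise setup is purely an illustrative assumption. In pure noise every pixel is independent of its neighbors, while in a structured image neighboring pixels are correlated, so a simple neighbor-correlation score already separates the two:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pure noise: every pixel is independent of its neighbors.
noise = rng.random((64, 64))

# A simple "pattern": vertical stripes plus a little noise (hypothetical example).
x = np.arange(64)
stripes = 0.5 + 0.5 * np.sin(x / 4.0)                 # varies across columns
pattern = np.tile(stripes, (64, 1)) + 0.1 * rng.random((64, 64))

def neighbor_correlation(img):
    """Correlation between each pixel and its right-hand neighbor."""
    a = img[:, :-1].ravel()
    b = img[:, 1:].ravel()
    return np.corrcoef(a, b)[0, 1]

print(neighbor_correlation(noise))    # near 0: no horizontal structure
print(neighbor_correlation(pattern))  # near 1: strong horizontal structure
```

This is of course a one-trick detector (it only sees horizontal correlation), but it shows the basic idea: "structure" is anything that makes pixels statistically predictable from their neighbors.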
With all that being said, I'm not one bit closer to my answer. However, this little experiment seems to be giving me better insight into my dilemma. I've always been curious about the fusiform gyrus and its role in prosopagnosia, the inability to recognize faces. A person with prosopagnosia will only see an apple, eggs, olives, a tomato, and (...what is that? ...red spaghetti?), while the rest of us will see Kermit the Frog and Animal. (Looking at such images, I really like this one for some reason - I think because of its bold colors - and wow, that's impressive.)
The thing is, a person with prosopagnosia doesn't have the face template to make comparisons against (or so, that's how we think of the disorder so far), much like how none of the 7,000+ noise images has anything resembling a face to provide a basis of comparison. That seems a bit ass-backwards, doesn't it? But the truth is, we have evolved to detect patterns because the world already has patterns in it. To understand pattern recognition at a very base level, you'd have to go back to when we were still nothing more than single-cell organisms not seeing patterns, and then understand how we developed from that point forward. The patterns are what drove our DNA engineering, not the other way around.
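The template idea above can be sketched in code. What follows is my own toy illustration, assuming a tiny made-up "face" template; it has nothing to do with the actual fusiform gyrus. Without a stored template there is nothing to compare patches against, but given one, sliding it over an image and scoring normalized correlation at each position finds the face-like region even when it's buried in noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# A tiny hypothetical "face template": a 3x3 blob (pure illustration).
template = np.array([[0, 1, 0],
                     [1, 1, 1],
                     [0, 1, 0]], dtype=float)

def best_match_score(image, template):
    """Slide the template over the image; return the best normalized correlation."""
    t = template - template.mean()
    t = t / np.linalg.norm(t)
    h, w = template.shape
    best = -1.0
    for i in range(image.shape[0] - h + 1):
        for j in range(image.shape[1] - w + 1):
            patch = image[i:i+h, j:j+w]
            patch = patch - patch.mean()
            n = np.linalg.norm(patch)
            if n > 0:
                best = max(best, float(patch.ravel() @ t.ravel()) / n)
    return best

noise = rng.random((16, 16))
with_blob = noise.copy()
with_blob[5:8, 5:8] += 3 * template   # hide the "face" in the noise

print(best_match_score(noise, template))      # mediocre match at best
print(best_match_score(with_blob, template))  # near 1: template found
```

The point of the sketch: detection is only defined relative to a stored template. Delete the template and both images are just noise, which is the dilemma the post is circling.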
(Thanks, lolotehe, for the article.)