Not posted in the political forum because this was just interesting in general and can be discussed in a work context. Mods, feel free to move.
Don't know how many other software guys we have here, but our VP sent us an AI class we all have to take. About 10% of it was real AI material (barely scratching the surface of the terminology); the rest was about how we shouldn't let AI learn biases.
Well, yes, correct: we want data to be accurate, since it's possible to feed data into models to manipulate them. But "biases" is an automatic red flag for me, so I keep going.
Every bias listed is a statistical fact or a phenomenon of the data being used. One example: being upset that if a little girl searches for CEOs, she'll mostly find men (again, a limited dataset that is statistically small and less likely to diversify, by the very nature of how exclusively the data is defined). The class goes on about how horrible this is, and how we essentially need to fudge the data until it matches what we want it to say, not what it actually says. Even though that girl can also just search for female CEOs, etc.
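For the other software folks: the "bias" in cases like this is usually just a model reproducing the base rate of whatever data it was trained on. A minimal sketch of that idea, with made-up labels and a trivial majority-class "model" (plain Python, no ML library, purely illustrative):

```python
from collections import Counter

# Hypothetical training labels whose distribution simply mirrors
# the skewed real-world dataset being discussed (numbers invented).
training_labels = ["man"] * 90 + ["woman"] * 10

def most_likely_label(labels):
    """A trivial 'model': predict whichever class dominated training."""
    counts = Counter(labels)
    return counts.most_common(1)[0][0]

# The model isn't inventing a bias; it is reflecting the data's statistics.
print(most_likely_label(training_labels))  # prints "man"
```

A real ranking or classification model is vastly more complex, but the mechanism is the same: skewed inputs produce outputs that match the skew, which is what the class labels a "bias."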
This is the stuff that will slowly work its way into technology like facial recognition, threat identification, loan approvals and denials, etc.
I mention those examples because they were all case studies the class used, where the factual statistics on huge datasets don't match the narrative they prefer, so they change the data. Ya know, because everything in the world might say that a person really shouldn't be given a loan they can't afford, but we don't want that to be the case.
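The loan case works the same way: a model trained on repayment outcomes effectively learns a cutoff, and "adjusting the data" means moving that cutoff away from what the repayment statistics support. A hypothetical sketch, with a made-up debt-to-income cutoff (not any real underwriting rule):

```python
# Hypothetical loan check: approve if the debt-to-income ratio stays
# under a cutoff a model might learn from repayment data. The 0.43
# figure here is invented purely for illustration.
def approve_loan(monthly_debt: float, monthly_income: float,
                 max_dti: float = 0.43) -> bool:
    """Approve when debt-to-income is below the learned cutoff."""
    if monthly_income <= 0:
        return False
    return monthly_debt / monthly_income < max_dti

# "Fudging the data" amounts to shifting max_dti away from whatever
# value the repayment statistics actually point to.
print(approve_loan(1500, 4000))  # 0.375 < 0.43, so approved
print(approve_loan(3000, 4000))  # 0.75 >= 0.43, so denied
```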
Didn't kill a deer this year, so these are my after-season ramblings.