Special Features of AI blog

Fixing AI from blunders to brilliance





In one of my earlier blogs, I talked about how AI can be unfair because of bias based on gender, race, and other traits. In another blog, I covered some of the funniest and craziest mistakes AI makes, such as suggesting that kids eat glue, or confusing people with darker skin for gorillas.


Different problems, same source


Bias and failures often come from the same underlying issues: incomplete data that doesn't represent everyone, no checks on whether outputs make sense before decisions go live, and narrow testing that only catches problems once it's too late.


Fixes that actually work


Here are some fixes for AI blunders that actually work:


1. Better data: Check the training data properly for errors. Make sure it is balanced, free of junk, and free of harmful examples so the AI can be fair and accurate (there's a small sketch of a balance check after this list).

2. Humans in the loop: People should review important decisions and double-check important results before anything gets released.

3. Test for the unexpected: Don't just test the easy stuff. Try unusual prompts, tricky images, and odd questions, and fix what breaks before launching (the sketch below shows this idea too).
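
To make fixes #1 and #3 a little more concrete, here's a quick sketch in Python. None of it comes from a real system: the example rows, the group names, the 40% threshold, and the classify() placeholder are all made up just to show the idea. The first half counts how many examples each group has, so an unbalanced dataset stands out before training; the second half runs a few weird inputs through a stand-in model to catch failures before launch.

    from collections import Counter

    # Fix #1: check whether the training data is balanced across groups.
    # These rows are made-up; in practice you would load your real dataset.
    rows = [
        {"text": "resume A", "group": "women"},
        {"text": "resume B", "group": "men"},
        {"text": "resume C", "group": "men"},
        {"text": "resume D", "group": "men"},
    ]
    counts = Counter(row["group"] for row in rows)
    total = sum(counts.values())
    for group, n in counts.items():
        share = n / total
        print(f"{group}: {n} examples ({share:.0%})")
        if share < 0.4:  # arbitrary threshold, just for illustration
            print(f"  warning: '{group}' looks under-represented")

    # Fix #3: run a few unusual inputs through the model before launch.
    def classify(text):
        # Placeholder: swap in a call to the real model you are testing.
        return "ok" if text.strip() else None

    unusual_inputs = ["", "🙃" * 500, "a question written entirely in emoji 🤔❓"]
    for text in unusual_inputs:
        answer = classify(text)
        if answer is None:
            print(f"model failed on unusual input: {text!r}")

In a real project you would load your actual dataset and call your actual model instead of these placeholders, but the idea is the same: count, compare, and poke at the weird cases before anyone else does.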

Why this matters

AI is already used in healthcare, education, training, hiring, and much more. If AI is biased, it treats people unfairly. If it fails in public, trust drops, and trust is really hard to earn back.


Conclusion

AI is not impossible to fix with good methods. With balanced data, human oversight, and testing for the unusual stuff, we can make AI something that is fair and accurate.



