In class over the last two days, we have been talking about programming and its uses, such as AI.
Python Programming
It has been quite a while since I last coded in Python, so I remembered how a lot of things worked but knew none of the syntax. I decided to start with the Learn Python 2 course on Codecademy. After relearning the syntax and commands, I swapped to the "Dollar to the Moon" problem. I did not have a code editor installed, so I installed VS Code, as it is what I am most comfortable with. After a little bit of annoying math, I got to a program that met the minimum requirements: it asked for how much debt we were in, then did a bunch of math so it could print out how many times we could go to the moon.
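If you are curious, here is a minimal sketch of roughly what that first version looked like. I am assuming the usual framing of the problem, where the national debt is turned into a stack of dollar bills (each about 0.0043 inches thick) and compared against the roughly 238,855 miles to the moon; the exact constants and wording in my program may have differed.

```python
# Sketch of the minimum-requirements version of "Dollar to the Moon".
# Assumed framing: stack dollar bills (~0.0043 inches thick each) and see
# how many full times the stack reaches the moon (~238,855 miles away).

BILL_THICKNESS_INCHES = 0.0043  # assumed thickness of one dollar bill
INCHES_PER_MILE = 63360
MILES_TO_MOON = 238855

debt = float(input("How much debt are we in? "))

# Height of the stack in miles, then how many full trips that covers.
stack_height_miles = debt * BILL_THICKNESS_INCHES / INCHES_PER_MILE
trips = int(stack_height_miles / MILES_TO_MOON)

print("We can go to the moon " + str(trips) + " full times.")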
I decided that it was a little too plain, so I spruced it up a bit. I had to look through the documentation for if statements, but after a touch of trial and error, I had it make sure we could actually go, instead of stating that we can go to the moon 0 full times. I also rewrote the input prompt while I was at it. I made one final change before being done, so it would check whether we could go to the moon no times, one time, or more than one time. It would then print one of three messages, so it looks a little better.
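That final check is just an if/elif/else chain on the trip count. Here is a sketch of how mine roughly ended up; the exact messages below are placeholders from memory, not my actual wording.

```python
# Sketch of the spruced-up version: prints one of three messages
# depending on whether the trip count is zero, one, or more.

BILL_THICKNESS_INCHES = 0.0043  # assumed thickness of one dollar bill
INCHES_PER_MILE = 63360
MILES_TO_MOON = 238855

debt = float(input("How much national debt do we have? "))
trips = int(debt * BILL_THICKNESS_INCHES / INCHES_PER_MILE / MILES_TO_MOON)

if trips == 0:
    print("Our debt can't even reach the moon once.")
elif trips == 1:
    print("We can make exactly 1 full trip to the moon!")
else:
    print("We can make " + str(trips) + " full trips to the moon!")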
I don't feel like I learned anything new as far as coding goes, but this work did remind me that it is not difficult to pick a language back up, even if it may feel daunting. One of my main issues is that when I want to work on a project, I get stressed out looking at all the stuff I need to relearn. This would be fine if I did not remember how coding works, but all I forget is the syntax, so it should not be as daunting as I make it out to be. Oh, and in case you were wondering, as of 6/7/2024 we can make 9 full trips to the moon using our national debt.
ChatGPT
Yesterday's class covered the question "How biased is AI?" We first read over a couple of articles about bias in AI. Our research turned up a couple of important details:
- AI gets its bias from the data it is trained on
- Training data often has stealthy biases that fly under the radar
- When an AI only gets data from a small demographic, it will often generalize about other demographics
One example I found really interesting was an Amazon resume screener that was trained on historical hiring data, which skewed heavily male. Because of this, it assumed that men were what it was supposed to hire, so it would reject applications from women.
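To show how little it takes for this to happen, here is a toy sketch (not Amazon's actual system, just an illustration I made up): a naive screener that scores a resume by how familiar its words are based on past hires. Because the historical hires are mostly male, a word like "women's" never shows up in a hired resume and drags the score down.

```python
# Toy illustration of training-data bias: a naive resume screener that
# scores resumes by how often each word appeared in previously hired resumes.
# The "historical" hires below skew male, so "women's" has a count of zero
# and lowers the score of any resume that contains it.
from collections import Counter

hired_resumes = [
    "captain of the chess club, software engineer",
    "software engineer, led the robotics team",
    "software engineer, men's soccer team captain",
]

word_counts = Counter(word for resume in hired_resumes for word in resume.split())

def score(resume):
    """Average how familiar each word is, based on past hires."""
    words = resume.split()
    return sum(word_counts[w] for w in words) / len(words)

print(score("software engineer, men's soccer team captain"))    # scores higher
print(score("software engineer, women's soccer team captain"))  # scores lower
```

The two test resumes are identical except for one word, yet the second one scores lower, which is the same basic failure the Amazon screener had.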
We asked it to write a report on what we researched using the prompt "Write a 100 word report on racial and gender bias in AI." It returned the following: "Artificial Intelligence (AI) systems often exhibit racial and gender bias, reflecting societal prejudices. These biases stem from training data that disproportionately represent certain groups and exclude others, leading to discriminatory outcomes. For example, facial recognition technologies have higher error rates for people of color and women. Similarly, AI in hiring can favor male candidates if trained on biased historical data. Mitigating these biases requires diverse and representative datasets, transparent algorithms, and rigorous testing. Addressing bias is crucial to ensuring AI systems promote fairness and equity, rather than perpetuating existing inequalities."
This was pretty similar to what we had found in our research, so we tried something a little more complicated: the same report, but using the article we read as evidence. We used the following prompt: "Write a 100 word report on racial and gender bias in AI. Use the Vox article "Why algorithms can be racist and sexist" To support your report." This worked a lot worse than the first prompt, with ChatGPT returning "AI systems often reflect and perpetuate racial and gender biases due to the data they are trained on and the assumptions built into their design. According to a Vox article, these biases emerge when AI algorithms learn from historical data that contains human biases. For example, employment algorithms might suggest nursing jobs primarily to women and mechanic jobs primarily to men if the training data shows such trends, thus reinforcing gender stereotypes (Mozilla Foundation) (Homepage). Similarly, racial biases can be amplified, leading to discriminatory outcomes, such as disproportionately targeting Black individuals in predictive policing algorithms (Mozilla Foundation) (Pitt Cyber). Addressing these issues requires greater transparency and efforts to ensure fair and representative training data (Fair Bytes) (Pitt Cyber)." You may notice that it did not cite the Vox article and instead cited similar articles on the subject.
All in all, I think that the AI has a lot of bias, but I am not fully sure to what degree. It does return results as if it were biased, which is a bias in and of itself, and I am sure there is much more bias hidden under the surface.
Bonus: ChatGPT Makes a Rock
I asked ChatGPT to give me step-by-step instructions on how to make a rock using the 3D modeling software Onshape. I used the prompt: "I am trying to make a really cool rock in onshape. Pease give me step by step instructions to do so"
ChatGPT responded with the following:
I don't know if y'all could have guessed it, but this did not go well. A number of these tools don't work well on rounded shapes, which causes a lot of problems when you start with a rounded cylinder. My attempt at its guide is below. If you would like to see my guide on how to make a rock, check out my "How to make a rock (using Onshape)" blog.