After years of Hollywood's futuristic movies featuring highly intelligent computers and robots, we as humans have gotten the idea that if AI technologies became as smart as predicted, they would eventually take over our jobs, making human labor useless. But that is not the case, according to the article, because it points out that when humans are pushed out of a job by automation, they find new kinds of work to do. And that is why technology leaders like Elon Musk, Stephen Hawking, and Steve Wozniak argue that we need regulations to limit how AI can operate, so it doesn't take over what we have created or use our weapons against us.
I feel like these kinds of futuristic, almost fictitious scenarios could some day come true. If AI gets as intelligent as we predict, learning far faster than humans and never repeating a mistake on a second attempt, then we are talking about some seriously intelligent algorithms. Under Davis's rules, I believe AI will at first follow the rules that constrain it in a strictly obedient way, never stepping outside the boundaries it is given. But if it becomes as intelligent as I believe it will, it could eventually turn into a system with feelings, capable of exercising its own judgment in different situations, and it could even follow the rules it is supposed to obey in a maliciously obedient way: serving not humanity but its own interest in thriving in the world of computers. That is why I believe we should put serious boundaries on these kinds of technologies and make computers believe, without question, that humans are there to be served, not replaced.