I don't have a lot of knowledge about Artificial Intelligence. Just the very basic stuff.
Take the hide-and-seek video https://www.youtube.com/watch?v=kopoLzvh5jY: after a very large number of repetitions the AI started to go crazy. Up to a certain point it worked nicely and smoothly, nothing weird, but after that it went a bit nuts. So one idea would be to know when to stop the AI's learning, and if it has gone too far, to be able to roll back to the last good state, then either freeze it there and let it run on that state forever, or start fresh learning with other parameters and another strategy.
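In machine learning this idea is usually called early stopping with checkpointing: you keep a copy of the best-behaving version so far and stop (or roll back) once behaviour starts getting worse. Here is a minimal sketch of that loop; the "agent" and its "training" are purely simulated placeholders so the example runs on its own, not anything from the hide-and-seek project.

```python
# Minimal sketch of "stop learning and keep the last good state":
# early stopping plus a checkpoint you can roll back to.
# The agent is just one number and training is simulated, to keep it runnable.

import copy
import random

def train_one_round(agent):
    # Stand-in for one round of learning: nudge the agent's single parameter.
    agent["param"] += random.uniform(-0.5, 1.0)

def evaluate(agent):
    # Stand-in for measuring behaviour: quality peaks at param == 10, then degrades.
    return -abs(agent["param"] - 10.0)

def train_with_rollback(rounds=200, patience=20):
    agent = {"param": 0.0}
    best_score = float("-inf")
    best_state = copy.deepcopy(agent)    # last known-good checkpoint
    bad_rounds = 0

    for _ in range(rounds):
        train_one_round(agent)
        score = evaluate(agent)
        if score > best_score:
            best_score = score
            best_state = copy.deepcopy(agent)
            bad_rounds = 0
        else:
            bad_rounds += 1
        if bad_rounds >= patience:       # behaviour kept getting worse: stop
            break

    return best_state                    # freeze and deploy this version

print(train_with_rollback())
```

The same pattern works whether "agent" is one number or a full neural network: you only ever ship the checkpoint, never the live, still-learning copy.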
Another idea is to discover the limits of that AI and then limit it, at least in real-life cases. For simulations I think it's fine to simulate whatever you want; you could discover amazing things. But for final products, services, and so on, limiting the AI's learning is a way to keep it from failing.
One real-life case is Tesla's Autopilot. I think a good approach is to have a very well-trained AI in your car, while Tesla keeps another learning system on their servers that learns in real time. Then, once in a while, the AI in your car gets updated, but only after the developers know how much the update actually improves the overall behaviour. I think this is roughly how Tesla works.
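That "update only after it proves itself" step could look something like the sketch below. This is purely illustrative and not Tesla's actual pipeline: the models and validation scores are simulated, and the names are made up for the example.

```python
# Hedged sketch of a gated rollout: the candidate model trained on the server
# is only promoted if it clearly beats the currently deployed one on a fixed
# validation suite. Everything here is a toy stand-in.

import random

def validate(model):
    # Stand-in for running a model through a fixed suite of driving scenarios.
    return model["skill"] + random.uniform(-0.05, 0.05)

def maybe_release(deployed, candidate, margin=0.02):
    """Promote the candidate only if it clearly outperforms the deployed model."""
    if validate(candidate) > validate(deployed) + margin:
        return candidate     # push the over-the-air update
    return deployed          # keep the current, known-good model

deployed = {"skill": 0.80}   # frozen model running in the cars
candidate = {"skill": 0.85}  # newer model trained on the servers
deployed = maybe_release(deployed, candidate)
print(deployed)
```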
But having a machine that learns in real time inside your autopilot is not a good idea at all. If something big fails, or the AI gets hacked, it could all go horribly wrong. The most important thing is investing in security (making the system impossible to crack or hack) for the safety of the people. Over time the AI will become safer and safer on the road, and maybe new, more capable autopilots will be discovered, and so on. But you have to simulate a lot before you let any stage of the learning out on the streets. So if they invest heavily in security, keep developing it, and get something solid, then I don't see major risks. It's like competitive video games: stopping cheaters from ruining your game is the most important thing. If there were a 100% (or close to 100%) reliable way to find cheaters and ban them, I'm sure that game would be the most popular out there, and it would be amazing to watch streams of online tournaments without the doubt that someone is cheating.
The real risks lie with governments. If they apply AI to weapons, war strategy with nuclear involvement, and so on, it could go bad. I imagine a swarm of drones, like a flock of birds flying in big groups, heading into enemy territory and attacking like the robots attacking Stark Tower in The Avengers. These days that could be done easily, even if war nowadays is fought more on the financial side, through allies, bio-warfare, influence, and other kinds of conflict.
If an AI touches real-life things, its learning definitely has to be limited, and whenever more data or updates are added, they have to be very well tested. Controlling this is difficult, but it needs to be done. If there are malicious people who can program an AI to learn things in a bad way, without respecting basic morals, that could be much worse than an AI evolving badly by accident. That's why I think there always has to be a limit on learning, and a lot of testing, before putting such results into real life.
At a very large scale, AI could be equally helpful and equally harmful for humanity. The key is to be sure we can control it, and that we can shut it down without any big or important repercussions.